Result: SUCCESS
Tests: 1 failed / 21 succeeded
Started: 2020-09-19 23:41
Elapsed: 1h37m
Work namespace: ci-op-3kz8n3rl
Pod: 9c08d49e-fad1-11ea-a1fd-0a580a800db2
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (59m30s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
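
For reference, the annotated form below is a sketch of the same reproduction command, assuming it is run from a checkout of the repository that provides hack/e2e.go (openshift/origin in these CI jobs) at the release branch under test; only the comments are added.

# Re-run only the failing monitor test.
# The --ginkgo.focus value is the test name
# ("openshift-tests Monitor cluster while tests execute")
# with spaces and hyphens escaped as regex metacharacters and
# anchored with a trailing "$" so no other tests match.
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'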
234 error level events were detected during this test run:

Sep 20 00:14:36.152 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-57d98456bd-bvh5c node/ip-10-0-128-186.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): .go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 6947 (13733)\nW0920 00:08:00.351681       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 5579 (13277)\nW0920 00:08:00.351956       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13341 (13390)\nW0920 00:12:17.676001       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 14697 (15823)\nW0920 00:12:51.832090       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 13270 (16039)\nW0920 00:12:52.295098       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 13277 (16047)\nW0920 00:12:52.581797       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 13231 (16051)\nW0920 00:13:12.311958       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 13906 (15965)\nW0920 00:13:51.255839       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 13910 (16225)\nW0920 00:14:34.968356       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.KubeControllerManager ended with: too old resource version: 13234 (16639)\nI0920 00:14:35.094129       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0920 00:14:35.094310       1 leaderelection.go:65] leaderelection lost\n
Sep 20 00:14:47.203 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-6cd9c58688-wmtl2 node/ip-10-0-128-186.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): tory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14318 (14427)\nW0920 00:09:51.473712       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 6851 (14427)\nW0920 00:09:51.474445       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Role ended with: too old resource version: 5983 (13927)\nW0920 00:09:51.482782       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 5583 (13921)\nW0920 00:12:17.681730       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 14697 (15823)\nW0920 00:12:51.826702       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 14438 (16039)\nW0920 00:12:53.193911       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Scheduler ended with: too old resource version: 14460 (16059)\nW0920 00:14:34.962040       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.KubeScheduler ended with: too old resource version: 14429 (16639)\nI0920 00:14:46.116234       1 observer_polling.go:78] Observed change: file:/var/run/configmaps/config/config.yaml (current: "1547e95a0f9797cdf0458dff1f179894fccb5742404642939e01ae9281780611", lastKnown: "7169eed1ac64413edbf10ff422c9443cce738894a26e3585f0e6aab2b99c16c9")\nW0920 00:14:46.116277       1 builder.go:108] Restart triggered because of file /var/run/configmaps/config/config.yaml was modified\nF0920 00:14:46.116361       1 leaderelection.go:65] leaderelection lost\nI0920 00:14:46.132207       1 backing_resource_controller.go:148] Shutting down BackingResourceController\nF0920 00:14:46.124991       1 builder.go:217] server exited\n
Sep 20 00:14:57.230 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-6cd9c58688-wmtl2 node/ip-10-0-128-186.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): I0920 00:14:47.541101       1 cmd.go:160] Using service-serving-cert provided certificates\nI0920 00:14:47.541815       1 observer_polling.go:106] Starting file observer\nI0920 00:14:47.959115       1 secure_serving.go:116] Serving securely on 0.0.0.0:8443\nI0920 00:14:47.959942       1 leaderelection.go:205] attempting to acquire leader lease  openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock...\nI0920 00:14:56.190890       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0920 00:14:56.191108       1 leaderelection.go:65] leaderelection lost\n
Sep 20 00:16:10.449 E clusteroperator/cloud-credential changed Degraded to True: CredentialsFailing: 1 of 8 credentials requests are failing to sync.
Sep 20 00:16:31.179 E ns/openshift-machine-api pod/machine-api-operator-59b4994479-kt5bc node/ip-10-0-128-186.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Sep 20 00:16:31.370 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6b5dd676c9-xjjxz node/ip-10-0-128-186.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): e version: 9806 (13268)\nW0920 00:09:51.461527       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13341 (13406)\nW0920 00:12:17.676441       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 14697 (15823)\nW0920 00:12:52.090387       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 13257 (16042)\nW0920 00:12:52.427771       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 13301 (16049)\nW0920 00:12:52.976708       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Project ended with: too old resource version: 13153 (16058)\nW0920 00:15:05.263985       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13514 (13930)\nW0920 00:15:15.425763       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14518 (16778)\nW0920 00:15:31.173221       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 14569 (16904)\nW0920 00:16:08.109701       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.OpenShiftAPIServer ended with: too old resource version: 13268 (17282)\nW0920 00:16:08.166284       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13474 (13931)\nI0920 00:16:30.150221       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0920 00:16:30.150291       1 leaderelection.go:65] leaderelection lost\n
Sep 20 00:19:52.013 E ns/openshift-machine-api pod/machine-api-controllers-75df6f4c8-4fgs6 node/ip-10-0-151-235.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Sep 20 00:19:52.013 E ns/openshift-machine-api pod/machine-api-controllers-75df6f4c8-4fgs6 node/ip-10-0-151-235.us-west-1.compute.internal container=nodelink-controller container exited with code 2 (Error): 
Sep 20 00:20:08.473 E ns/openshift-authentication pod/oauth-openshift-75fbc5c6d5-zb4nw node/ip-10-0-132-71.us-west-1.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:20:11.880 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-6d85b897cc-5shhv node/ip-10-0-128-186.us-west-1.compute.internal container=operator container exited with code 255 (Error): /informers/externalversions/factory.go:101: watch of *v1.OpenShiftControllerManager ended with: too old resource version: 13175 (19777)\nI0920 00:19:57.934627       1 reflector.go:169] Listing and watching *v1.OpenShiftControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0920 00:20:02.090307       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"da504481-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ObservedConfigChanged' Writing updated observed config: {"build":{"buildDefaults":{"resources":{}},"imageTemplateFormat":{"format":"\n\nA: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b7b818fe2a4430d34453e7386cda035ebb46fc7e5062cb1da99f64d2c7730ca"}},"deployer":{"imageTemplateFormat":{"format":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e4625967084a41fb1afb9b998afe2e67e2ede7273e0baadf8e54792dd5a1080"}},"dockerPullSecret":{"internalRegistryHostname":"image-registry.openshift-image-registry.svc:5000"}}\n\nB: registry.svc.ci.openshift.org/ocp/4.2-2020-09-19-194502@sha256:171618b50252a010d5ce992e800378d0c5b97caa07983144ee9fee0d481491d5"}},"deployer":{"imageTemplateFormat":{"format":"registry.svc.ci.openshift.org/ocp/4.2-2020-09-19-194502@sha256:38e946d1bf81d3f200de829f28400df2edddf6eca25875eb43cdc4173501a886"}},"dockerPullSecret":{"internalRegistryHostname":"image-registry.openshift-image-registry.svc:5000"}}\n\nI0920 00:20:11.092107       1 observer_polling.go:78] Observed change: file:/var/run/configmaps/config/config.yaml (current: "1547e95a0f9797cdf0458dff1f179894fccb5742404642939e01ae9281780611", lastKnown: "7169eed1ac64413edbf10ff422c9443cce738894a26e3585f0e6aab2b99c16c9")\nW0920 00:20:11.092248       1 builder.go:108] Restart triggered because of file /var/run/configmaps/config/config.yaml was modified\nF0920 00:20:11.092443       1 leaderelection.go:65] leaderelection lost\n
Sep 20 00:20:22.252 E ns/openshift-monitoring pod/node-exporter-9bb4k node/ip-10-0-141-28.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Sep 20 00:20:22.731 E ns/openshift-monitoring pod/node-exporter-dl9g9 node/ip-10-0-137-51.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 
Sep 20 00:20:23.856 E ns/openshift-cluster-node-tuning-operator pod/tuned-nv6qg node/ip-10-0-137-51.us-west-1.compute.internal container=tuned container exited with code 143 (Error): abels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:12:10.032825   11777 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0920 00:12:10.034177   11777 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:12:10.142622   11777 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0920 00:13:24.374090   11777 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-operator-774d58879f-p76mm) labels changed node wide: true\nI0920 00:13:25.031176   11777 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:13:25.032607   11777 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:13:25.141967   11777 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0920 00:14:19.026264   11777 openshift-tuned.go:691] Lowering resyncPeriod to 67\nI0920 00:16:28.953702   11777 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0920 00:16:28.957032   11777 openshift-tuned.go:720] Pod event watch channel closed.\nI0920 00:16:28.957051   11777 openshift-tuned.go:722] Increasing resyncPeriod to 134\nI0920 00:18:42.957339   11777 openshift-tuned.go:187] Extracting tuned profiles\nI0920 00:18:42.959446   11777 openshift-tuned.go:623] Resync period to pull node/pod labels: 134 [s]\nI0920 00:18:42.974544   11777 openshift-tuned.go:435] Pod (e2e-k8s-sig-apps-deployment-upgrade-6255/dp-57cc5d77b4-65m2x) labels changed node wide: true\nI0920 00:18:47.971737   11777 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:18:47.973313   11777 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0920 00:18:47.974355   11777 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:18:48.082418   11777 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Sep 20 00:20:24.231 E ns/openshift-cluster-node-tuning-operator pod/tuned-ccmkp node/ip-10-0-141-28.us-west-1.compute.internal container=tuned container exited with code 143 (Error): r\n2020-09-20 00:07:06,247 INFO     tuned.daemon.daemon: starting tuning\n2020-09-20 00:07:06,250 INFO     tuned.daemon.controller: terminating controller\n2020-09-20 00:07:06,250 INFO     tuned.daemon.daemon: stopping tuning\n2020-09-20 00:07:06,253 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-09-20 00:07:06,254 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-09-20 00:07:06,256 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-20 00:07:06,258 INFO     tuned.plugins.base: instance disk: assigning devices xvda\n2020-09-20 00:07:06,260 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-09-20 00:07:06,362 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-20 00:07:06,364 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n2020-09-20 00:07:06,373 INFO     tuned.daemon.daemon: terminating Tuned in one-shot mode\nI0920 00:10:23.967868   11615 openshift-tuned.go:435] Pod (e2e-k8s-sig-apps-deployment-upgrade-6255/dp-69c8ff7647-hh767) labels changed node wide: true\nI0920 00:10:25.987268   11615 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:10:25.988689   11615 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:10:26.099493   11615 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0920 00:10:51.529390   11615 openshift-tuned.go:435] Pod (e2e-k8s-sig-apps-deployment-upgrade-6255/dp-69c8ff7647-hh767) labels changed node wide: true\nI0920 00:10:55.987291   11615 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:10:55.989038   11615 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:10:56.097627   11615 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Sep 20 00:20:24.306 E ns/openshift-cluster-node-tuning-operator pod/tuned-mwlft node/ip-10-0-148-225.us-west-1.compute.internal container=tuned container exited with code 143 (Error): enshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:13:49.009164   12601 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:13:49.117581   12601 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0920 00:14:26.998595   12601 openshift-tuned.go:691] Lowering resyncPeriod to 69\nE0920 00:16:28.960979   12601 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""\nE0920 00:16:28.965060   12601 openshift-tuned.go:720] Pod event watch channel closed.\nI0920 00:16:28.965080   12601 openshift-tuned.go:722] Increasing resyncPeriod to 138\nI0920 00:18:46.965281   12601 openshift-tuned.go:187] Extracting tuned profiles\nI0920 00:18:46.967198   12601 openshift-tuned.go:623] Resync period to pull node/pod labels: 138 [s]\nI0920 00:18:46.984825   12601 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-operator-5cd8b4d764-tp5g8) labels changed node wide: true\nI0920 00:18:51.979601   12601 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:18:51.980951   12601 openshift-tuned.go:275] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0920 00:18:51.982071   12601 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:18:52.108729   12601 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0920 00:20:21.098195   12601 openshift-tuned.go:435] Pod (openshift-monitoring/prometheus-operator-5cd8b4d764-tp5g8) labels changed node wide: true\nI0920 00:20:21.979568   12601 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:20:21.981085   12601 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:20:22.122530   12601 openshift-tuned.go:523] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Sep 20 00:20:25.496 E ns/openshift-cluster-node-tuning-operator pod/tuned-s76lb node/ip-10-0-128-186.us-west-1.compute.internal container=tuned container exited with code 143 (Error): er-8b4pz) labels changed node wide: true\nI0920 00:18:25.988271   29049 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:18:25.991090   29049 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:18:26.128548   29049 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0920 00:19:52.975497   29049 openshift-tuned.go:691] Lowering resyncPeriod to 51\nI0920 00:19:59.389709   29049 openshift-tuned.go:435] Pod (openshift-monitoring/cluster-monitoring-operator-55d7766ddc-2hcpq) labels changed node wide: true\nI0920 00:20:00.988226   29049 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:20:00.989932   29049 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:20:01.162906   29049 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0920 00:20:01.436943   29049 openshift-tuned.go:435] Pod (openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-d66684796-hh5x8) labels changed node wide: true\nI0920 00:20:05.988217   29049 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:20:05.990120   29049 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:20:06.310156   29049 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0920 00:20:08.564340   29049 openshift-tuned.go:435] Pod (openshift-image-registry/cluster-image-registry-operator-7fd6588f58-9ntq2) labels changed node wide: true\nI0920 00:20:10.988207   29049 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:20:10.994496   29049 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:20:11.229148   29049 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
Sep 20 00:20:29.456 E ns/openshift-cluster-node-tuning-operator pod/tuned-zjlsx node/ip-10-0-132-71.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 14472   20742 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:20:01.316420   20742 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:20:01.520483   20742 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0920 00:20:02.034442   20742 openshift-tuned.go:435] Pod (openshift-authentication/oauth-openshift-75fbc5c6d5-zb4nw) labels changed node wide: true\nI0920 00:20:06.314716   20742 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:20:06.319163   20742 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:20:06.507426   20742 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0920 00:20:09.408393   20742 openshift-tuned.go:435] Pod (openshift-kube-controller-manager/revision-pruner-8-ip-10-0-132-71.us-west-1.compute.internal) labels changed node wide: false\nI0920 00:20:12.187795   20742 openshift-tuned.go:435] Pod (openshift-ingress-operator/ingress-operator-786cccddb6-tldld) labels changed node wide: true\nI0920 00:20:16.315292   20742 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:20:16.360297   20742 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:20:16.931909   20742 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0920 00:20:18.112415   20742 openshift-tuned.go:435] Pod (openshift-authentication/oauth-openshift-75fbc5c6d5-zb4nw) labels changed node wide: true\nI0920 00:20:21.314466   20742 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:20:21.316378   20742 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:20:21.528177   20742 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
Sep 20 00:20:30.652 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-68646766fb-94bbs node/ip-10-0-128-186.us-west-1.compute.internal container=manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:20:30.800 E ns/openshift-cluster-node-tuning-operator pod/tuned-blwxw node/ip-10-0-151-235.us-west-1.compute.internal container=tuned container exited with code 143 (Error): match.  Label changes will not trigger profile reload.\nI0920 00:16:11.571032   28368 openshift-tuned.go:435] Pod (openshift-machine-api/machine-api-operator-9bfd7cf79-l48z9) labels changed node wide: true\nI0920 00:16:13.027630   28368 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:16:13.029257   28368 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:16:13.154921   28368 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0920 00:19:00.772533   28368 openshift-tuned.go:435] Pod (openshift-apiserver/apiserver-9hswx) labels changed node wide: true\nI0920 00:19:03.027630   28368 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:19:03.029557   28368 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:19:03.153607   28368 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0920 00:19:16.513398   28368 openshift-tuned.go:435] Pod (openshift-kube-scheduler/installer-6-ip-10-0-151-235.us-west-1.compute.internal) labels changed node wide: false\nI0920 00:19:26.607128   28368 openshift-tuned.go:435] Pod (openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-151-235.us-west-1.compute.internal) labels changed node wide: true\nI0920 00:19:28.027658   28368 openshift-tuned.go:293] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0920 00:19:28.029310   28368 openshift-tuned.go:326] Getting recommended profile...\nI0920 00:19:28.165130   28368 openshift-tuned.go:523] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0920 00:19:48.727026   28368 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0920 00:19:48.728874   28368 openshift-tuned.go:720] Pod event watch channel closed.\nI0920 00:19:48.728898   28368 openshift-tuned.go:722] Increasing resyncPeriod to 132\n
Sep 20 00:20:35.589 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-6d9c9ggnz node/ip-10-0-132-71.us-west-1.compute.internal container=operator container exited with code 2 (Error): hift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.ServiceCatalogControllerManager ended with: too old resource version: 17565 (19575)\nI0920 00:19:49.828098       1 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132\nI0920 00:19:49.834906       1 reflector.go:169] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:132\nI0920 00:19:49.851832       1 reflector.go:169] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:132\nI0920 00:19:49.862532       1 reflector.go:169] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:132\nI0920 00:19:49.921509       1 reflector.go:169] Listing and watching *v1.ServiceCatalogControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0920 00:19:52.164255       1 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync\nI0920 00:19:52.319860       1 wrap.go:47] GET /metrics: (7.317104ms) 200 [Prometheus/2.7.2 10.129.2.8:56808]\nI0920 00:19:52.369227       1 wrap.go:47] GET /metrics: (18.363589ms) 200 [Prometheus/2.7.2 10.128.2.7:54712]\nI0920 00:19:56.803534       1 reflector.go:357] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: Watch close - *v1.ServiceCatalogControllerManager total 0 items received\nW0920 00:19:56.811447       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.ServiceCatalogControllerManager ended with: too old resource version: 19575 (19754)\nI0920 00:19:57.811720       1 reflector.go:169] Listing and watching *v1.ServiceCatalogControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0920 00:20:22.407049       1 wrap.go:47] GET /metrics: (78.395445ms) 200 [Prometheus/2.7.2 10.128.2.7:55792]\nI0920 00:20:22.432363       1 wrap.go:47] GET /metrics: (2.754482ms) 200 [Prometheus/2.7.2 10.129.2.8:58994]\n
Sep 20 00:20:39.139 E ns/openshift-authentication-operator pod/authentication-operator-6498b57c5-vwlcx node/ip-10-0-132-71.us-west-1.compute.internal container=operator container exited with code 255 (Error): -a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from False to True ("Progressing: deployment's observed generation did not reach the expected generation")\nI0920 00:20:05.646992       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-20T00:09:55Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-20T00:20:02Z","message":"Progressing: not all deployment replicas are ready","reason":"ProgressingOAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-09-20T00:09:55Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-20T00:03:09Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0920 00:20:05.682098       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"e239873c-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing message changed from "Progressing: deployment's observed generation did not reach the expected generation" to "Progressing: not all deployment replicas are ready"\nI0920 00:20:08.350486       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"e239873c-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed\nI0920 00:20:38.133722       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0920 00:20:38.133786       1 leaderelection.go:65] leaderelection lost\n
Sep 20 00:20:44.182 E ns/openshift-operator-lifecycle-manager pod/olm-operators-j6w2l node/ip-10-0-151-235.us-west-1.compute.internal container=configmap-registry-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:20:48.376 E ns/openshift-console-operator pod/console-operator-5bb677dfbb-t7gzp node/ip-10-0-151-235.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): d with: too old resource version: 14091 (20047)\ntime="2020-09-20T00:20:07Z" level=info msg="started syncing operator \"cluster\" (2020-09-20 00:20:07.54792707 +0000 UTC m=+977.557174983)"\ntime="2020-09-20T00:20:07Z" level=info msg="console is in a managed state."\ntime="2020-09-20T00:20:07Z" level=info msg="running sync loop 4.0.0"\ntime="2020-09-20T00:20:07Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-3kz8n3rl-eb227.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-09-20T00:20:07Z" level=info msg="service-ca configmap exists and is in the correct state"\ntime="2020-09-20T00:20:07Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-3kz8n3rl-eb227.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-09-20T00:20:07Z" level=info msg=-----------------------\ntime="2020-09-20T00:20:07Z" level=info msg="sync loop 4.0.0 resources updated: false \n"\ntime="2020-09-20T00:20:07Z" level=info msg=-----------------------\ntime="2020-09-20T00:20:07Z" level=info msg="deployment is available, ready replicas: 2 \n"\ntime="2020-09-20T00:20:07Z" level=info msg="sync_v400: updating console status"\ntime="2020-09-20T00:20:07Z" level=info msg="route ingress 'default' found and admitted, host: console-openshift-console.apps.ci-op-3kz8n3rl-eb227.origin-ci-int-aws.dev.rhcloud.com \n"\ntime="2020-09-20T00:20:07Z" level=info msg="sync loop 4.0.0 complete"\ntime="2020-09-20T00:20:07Z" level=info msg="finished syncing operator \"cluster\" (47.494µs) \n\n"\nI0920 00:20:41.210194       1 observer_polling.go:78] Observed change: file:/var/run/configmaps/config/controller-config.yaml (current: "41adc4f67c9486b2d108e39abc8b009b458b16fccaa6c860984b7a3410299dff", lastKnown: "9a9f171084db3a5d8481509ac5a988ce2000ef99d0a85c83a40458eb32dc4bbc")\nW0920 00:20:41.210235       1 builder.go:108] Restart triggered because of file /var/run/configmaps/config/controller-config.yaml was modified\nF0920 00:20:41.210297       1 leaderelection.go:65] leaderelection lost\n
Sep 20 00:21:14.721 E ns/openshift-monitoring pod/telemeter-client-7f468bb874-g7x7v node/ip-10-0-137-51.us-west-1.compute.internal container=reload container exited with code 2 (Error): 
Sep 20 00:21:14.721 E ns/openshift-monitoring pod/telemeter-client-7f468bb874-g7x7v node/ip-10-0-137-51.us-west-1.compute.internal container=telemeter-client container exited with code 2 (Error): 
Sep 20 00:21:17.147 E ns/openshift-marketplace pod/community-operators-7d9b68db7c-lg5s4 node/ip-10-0-141-28.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Sep 20 00:21:26.388 E ns/openshift-ingress pod/router-default-84ccbfb99f-7pnf6 node/ip-10-0-141-28.us-west-1.compute.internal container=router container exited with code 2 (Error): 00:20:12.936721       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:20:17.936521       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:20:22.936325       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:20:27.999108       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:20:32.995552       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:20:37.997170       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:20:43.003987       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:20:48.025693       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:00.809680       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:05.803297       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:10.807175       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:15.837577       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:20.794724       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Sep 20 00:21:36.671 E ns/openshift-operator-lifecycle-manager pod/packageserver-97f94c97-qxgbg node/ip-10-0-128-186.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:21:42.480 E ns/openshift-monitoring pod/kube-state-metrics-5446b9585-k28zs node/ip-10-0-141-28.us-west-1.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Sep 20 00:21:52.743 E clusteroperator/monitoring changed Degraded to True: UpdatingPrometheusOperatorFailed: Failed to rollout the stack. Error: running task Updating Prometheus Operator failed: reconciling Prometheus Operator Deployment failed: updating deployment object failed: Deployment.apps "prometheus-operator" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"prometheus-operator", "app.kubernetes.io/component":"controller"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
Sep 20 00:21:53.567 E ns/openshift-monitoring pod/prometheus-adapter-7b495bccfc-llm4c node/ip-10-0-141-28.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): 
Sep 20 00:21:54.676 E ns/openshift-monitoring pod/node-exporter-sh5rq node/ip-10-0-148-225.us-west-1.compute.internal container=node-exporter container exited with code 2 (Error): rometheus/node_exporter/collector/collector.go:117 +0x109\n\ngoroutine 113 [runnable]:\ngithub.com/prometheus/node_exporter/collector.NodeCollector.Collect.func1(0xc000102600, 0xc00013a6c0, 0xa4ef77, 0x8, 0xaf0e00, 0xc000193680)\n	/go/src/github.com/prometheus/node_exporter/collector/collector.go:117\ncreated by github.com/prometheus/node_exporter/collector.NodeCollector.Collect\n	/go/src/github.com/prometheus/node_exporter/collector/collector.go:117 +0x109\n\ngoroutine 114 [runnable]:\ngithub.com/prometheus/node_exporter/collector.NodeCollector.Collect.func1(0xc000102600, 0xc00013a6c0, 0xa4a9fc, 0x3, 0xaf0b00, 0xc00000c4d0)\n	/go/src/github.com/prometheus/node_exporter/collector/collector.go:117\ncreated by github.com/prometheus/node_exporter/collector.NodeCollector.Collect\n	/go/src/github.com/prometheus/node_exporter/collector/collector.go:117 +0x109\n\ngoroutine 115 [runnable]:\ngithub.com/prometheus/node_exporter/collector.NodeCollector.Collect.func1(0xc000102600, 0xc00013a6c0, 0xa4dc6a, 0x7, 0xaf0c40, 0xc00000c4d8)\n	/go/src/github.com/prometheus/node_exporter/collector/collector.go:117\ncreated by github.com/prometheus/node_exporter/collector.NodeCollector.Collect\n	/go/src/github.com/prometheus/node_exporter/collector/collector.go:117 +0x109\n\ngoroutine 116 [runnable]:\ngithub.com/prometheus/node_exporter/collector.NodeCollector.Collect.func1(0xc000102600, 0xc00013a6c0, 0xa4e0ae, 0x7, 0xaf0e40, 0xc00000c4b0)\n	/go/src/github.com/prometheus/node_exporter/collector/collector.go:117\ncreated by github.com/prometheus/node_exporter/collector.NodeCollector.Collect\n	/go/src/github.com/prometheus/node_exporter/collector/collector.go:117 +0x109\n\ngoroutine 117 [runnable]:\ngithub.com/prometheus/node_exporter/collector.NodeCollector.Collect.func1(0xc000102600, 0xc00013a6c0, 0xa4f916, 0x9, 0xaf0b80, 0xc0001a9ec0)\n	/go/src/github.com/prometheus/node_exporter/collector/collector.go:117\ncreated by github.com/prometheus/node_exporter/collector.NodeCollector.Collect\n	/go/src/github.com/prometheus/node_exporter/collector/collector.go:117 +0x109\n
Sep 20 00:21:54.690 E ns/openshift-ingress pod/router-default-84ccbfb99f-mm64p node/ip-10-0-148-225.us-west-1.compute.internal container=router container exited with code 2 (Error): 00:20:42.993130       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:20:47.992450       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:00.806234       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:05.790450       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:10.834702       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:15.835655       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:20.811275       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:25.803940       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:30.818385       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:35.788782       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:40.828530       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:45.808499       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0920 00:21:50.791034       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Sep 20 00:21:57.374 E ns/openshift-operator-lifecycle-manager pod/packageserver-97f94c97-r88sl node/ip-10-0-151-235.us-west-1.compute.internal container=packageserver container exited with code 137 (Error): 
Sep 20 00:22:24.942 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-225.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): 
Sep 20 00:22:24.942 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-225.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 
Sep 20 00:22:24.942 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-225.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 
Sep 20 00:22:29.714 E ns/openshift-service-ca pod/service-serving-cert-signer-865648474d-4vkpc node/ip-10-0-151-235.us-west-1.compute.internal container=service-serving-cert-signer-controller container exited with code 2 (Error): 
Sep 20 00:22:30.060 E ns/openshift-service-ca pod/apiservice-cabundle-injector-7b45d6d55b-6bdx5 node/ip-10-0-151-235.us-west-1.compute.internal container=apiservice-cabundle-injector-controller container exited with code 2 (Error): 
Sep 20 00:22:30.795 E ns/openshift-console pod/downloads-5c59f467dc-fcz74 node/ip-10-0-132-71.us-west-1.compute.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:22:46.954 E ns/openshift-controller-manager pod/controller-manager-jb56x node/ip-10-0-151-235.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Sep 20 00:22:54.045 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-51.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 1 (Error): 
Sep 20 00:22:59.084 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-225.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): =info ts=2020-09-20T00:22:48.371Z caller=main.go:332 fd_limits="(soft=1048576, hard=1048576)"\nlevel=info ts=2020-09-20T00:22:48.371Z caller=main.go:333 vm_limits="(soft=unlimited, hard=unlimited)"\nlevel=info ts=2020-09-20T00:22:48.372Z caller=main.go:652 msg="Starting TSDB ..."\nlevel=info ts=2020-09-20T00:22:48.372Z caller=web.go:448 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-20T00:22:48.376Z caller=main.go:667 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-20T00:22:48.377Z caller=main.go:668 msg="TSDB started"\nlevel=info ts=2020-09-20T00:22:48.377Z caller=main.go:738 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-20T00:22:48.377Z caller=main.go:521 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-20T00:22:48.377Z caller=main.go:535 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-20T00:22:48.377Z caller=main.go:557 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-20T00:22:48.377Z caller=main.go:531 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-20T00:22:48.377Z caller=main.go:551 msg="Scrape manager stopped"\nlevel=info ts=2020-09-20T00:22:48.377Z caller=main.go:517 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-20T00:22:48.377Z caller=manager.go:776 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-20T00:22:48.377Z caller=manager.go:782 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-20T00:22:48.378Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-20T00:22:48.378Z caller=main.go:722 msg="Notifier manager stopped"\nlevel=error ts=2020-09-20T00:22:48.378Z caller=main.go:731 err="error loading config from \"/etc/prometheus/config_out/prometheus.env.yaml\": couldn't load configuration (--config.file=\"/etc/prometheus/config_out/prometheus.env.yaml\"): open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 20 00:23:05.017 E ns/openshift-console pod/downloads-5c59f467dc-4gmmh node/ip-10-0-151-235.us-west-1.compute.internal container=download-server container exited with code 137 (Error): 
Sep 20 00:23:09.932 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-51.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 1 (Error): 
Sep 20 00:23:12.589 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-51.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:23:12.589 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-51.us-west-1.compute.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:23:12.589 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-51.us-west-1.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:23:12.589 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-51.us-west-1.compute.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:23:12.589 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-51.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:23:12.589 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-51.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:23:42.047 E ns/openshift-controller-manager pod/controller-manager-7rgpb node/ip-10-0-132-71.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Sep 20 00:23:46.588 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-51.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): =info ts=2020-09-20T00:23:37.575Z caller=main.go:332 fd_limits="(soft=1048576, hard=1048576)"\nlevel=info ts=2020-09-20T00:23:37.575Z caller=main.go:333 vm_limits="(soft=unlimited, hard=unlimited)"\nlevel=info ts=2020-09-20T00:23:37.577Z caller=main.go:652 msg="Starting TSDB ..."\nlevel=info ts=2020-09-20T00:23:37.577Z caller=web.go:448 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-20T00:23:37.582Z caller=main.go:667 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-20T00:23:37.582Z caller=main.go:668 msg="TSDB started"\nlevel=info ts=2020-09-20T00:23:37.582Z caller=main.go:738 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-20T00:23:37.582Z caller=main.go:521 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-20T00:23:37.582Z caller=main.go:535 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-20T00:23:37.582Z caller=main.go:557 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-20T00:23:37.582Z caller=main.go:531 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-20T00:23:37.582Z caller=main.go:517 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-20T00:23:37.582Z caller=manager.go:776 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-20T00:23:37.582Z caller=manager.go:782 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-20T00:23:37.582Z caller=main.go:551 msg="Scrape manager stopped"\nlevel=info ts=2020-09-20T00:23:37.583Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-20T00:23:37.583Z caller=main.go:722 msg="Notifier manager stopped"\nlevel=error ts=2020-09-20T00:23:37.583Z caller=main.go:731 err="error loading config from \"/etc/prometheus/config_out/prometheus.env.yaml\": couldn't load configuration (--config.file=\"/etc/prometheus/config_out/prometheus.env.yaml\"): open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 20 00:24:19.261 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-fdbfggvsx node/ip-10-0-132-71.us-west-1.compute.internal container=operator container exited with code 255 (Error): oxy ended with: too old resource version: 16029 (21356)\nW0920 00:21:53.511906       1 reflector.go:289] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.ServiceCatalogControllerManager ended with: too old resource version: 22445 (22554)\nI0920 00:21:54.363129       1 reflector.go:161] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0920 00:21:54.512108       1 reflector.go:161] Listing and watching *v1.ServiceCatalogControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0920 00:22:13.061549       1 wrap.go:47] GET /metrics: (26.255989ms) 200 [Prometheus/2.7.2 10.128.2.7:48132]\nI0920 00:22:13.171274       1 wrap.go:47] GET /metrics: (144.030519ms) 200 [Prometheus/2.7.2 10.129.2.8:60590]\nI0920 00:22:43.034691       1 wrap.go:47] GET /metrics: (6.787898ms) 200 [Prometheus/2.7.2 10.128.2.7:48132]\nI0920 00:23:24.397620       1 wrap.go:47] GET /metrics: (6.231037ms) 200 [Prometheus/2.11.2 10.129.2.21:58602]\nI0920 00:23:54.388055       1 wrap.go:47] GET /metrics: (6.314885ms) 200 [Prometheus/2.11.2 10.129.2.21:58602]\nI0920 00:24:14.999813       1 wrap.go:47] GET /metrics: (7.143198ms) 200 [Prometheus/2.11.2 10.128.2.19:33432]\nI0920 00:24:18.776454       1 observer_polling.go:78] Observed change: file:/var/run/secrets/serving-cert/tls.crt (current: "2c1150f91f268e8380c66035826a968e50083c93a4deb1405736970bee59a21e", lastKnown: "a942b146089ecf9ac18e8dc50bad30cc238035a3873a6942333b40214fa05184")\nW0920 00:24:18.776597       1 builder.go:108] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified\nI0920 00:24:18.788317       1 observer_polling.go:78] Observed change: file:/var/run/secrets/serving-cert/tls.key (current: "800961b20a09bfc5d62fb228b919db94bc7bc32deef94cd21aefe2d8f5754198", lastKnown: "3d022c649eaba1e3cc94b57df8a5261d225e9c66c70275e8be70c2ad0c3efb6c")\nF0920 00:24:18.788451       1 leaderelection.go:66] leaderelection lost\n
Sep 20 00:24:28.264 E ns/openshift-authentication-operator pod/authentication-operator-b5d58cb74-b7262 node/ip-10-0-151-235.us-west-1.compute.internal container=operator container exited with code 255 (Error): lusteroperator/authentication changed: Progressing message changed from "Progressing: deployment's observed generation did not reach the expected generation" to "Progressing: not all deployment replicas are ready"\nI0920 00:24:20.403476       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"e239873c-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed\nI0920 00:24:27.852175       1 observer_polling.go:88] Observed change: file:/var/run/secrets/serving-cert/tls.crt (current: "5dcde8323d7d6a513892c4b03ccac328ad6dddc8b0447f44ce24d53fe2457df0", lastKnown: "667acfa277194e80045d7c37f7615316d10815761345a6c9dba6a0835f151c4b")\nW0920 00:24:27.852215       1 builder.go:108] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified\nI0920 00:24:27.852331       1 observer_polling.go:88] Observed change: file:/var/run/secrets/serving-cert/tls.key (current: "29d538c12723eb0622f1f4349df9343b32eef341bfeb8545b779cdeae40a2b74", lastKnown: "9e5d21391c700158c19d8a77f42375223ec65722aeb6dd18fdb99c4f77893c06")\nF0920 00:24:27.852429       1 leaderelection.go:66] leaderelection lost\nI0920 00:24:27.864532       1 controller.go:70] Shutting down AuthenticationOperator2\nI0920 00:24:27.855577       1 unsupportedconfigoverrides_controller.go:161] Shutting down UnsupportedConfigOverridesController\nI0920 00:24:27.864566       1 resourcesync_controller.go:227] Shutting down ResourceSyncController\nI0920 00:24:27.864570       1 management_state_controller.go:111] Shutting down management-state-controller-authentication\nI0920 00:24:27.864580       1 status_controller.go:201] Shutting down StatusSyncer-authentication\nI0920 00:24:27.864594       1 remove_stale_conditions.go:83] Shutting down RemoveStaleConditions\nF0920 00:24:27.864539       1 builder.go:217] server exited\n
Sep 20 00:24:36.354 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-5d5dcc5cb7-tmwv7 node/ip-10-0-132-71.us-west-1.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): :35.831151       1 observer_polling.go:114] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="9b81ad2f278beec2af4027abc980b5653bc3b6c650ae9e59dcbc79ddba3a7e32", new="23d04ce73c5557e1fcc6ed29682eb7f8b51354b374a476d5495222de244434c2")\nF0920 00:24:35.831180       1 leaderelection.go:66] leaderelection lost\nI0920 00:24:35.851317       1 client_cert_rotation_controller.go:179] Shutting down CertRotationController - "LocalhostServing"\nI0920 00:24:35.851354       1 client_cert_rotation_controller.go:179] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0920 00:24:35.851370       1 client_cert_rotation_controller.go:179] Shutting down CertRotationController - "KubeSchedulerClient"\nI0920 00:24:35.851386       1 client_cert_rotation_controller.go:179] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0920 00:24:35.851402       1 client_cert_rotation_controller.go:179] Shutting down CertRotationController - "ServiceNetworkServing"\nI0920 00:24:35.851414       1 client_cert_rotation_controller.go:179] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0920 00:24:35.851428       1 client_cert_rotation_controller.go:179] Shutting down CertRotationController - "KubeAPIServerCertSyncer"\nI0920 00:24:35.851444       1 client_cert_rotation_controller.go:179] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0920 00:24:35.851458       1 client_cert_rotation_controller.go:179] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0920 00:24:35.851471       1 monitoring_resource_controller.go:172] Shutting down MonitoringResourceController\nI0920 00:24:35.851479       1 secure_serving.go:160] Stopped listening on 0.0.0.0:8443\nI0920 00:24:35.851484       1 backing_resource_controller.go:148] Shutting down BackingResourceController\nI0920 00:24:35.851499       1 revision_controller.go:349] Shutting down RevisionController\nI0920 00:24:35.851512       1 config_observer_controller.go:159] Shutting down ConfigObserver\n
Sep 20 00:24:37.331 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-699dbdfcb9-sjx4m node/ip-10-0-151-235.us-west-1.compute.internal container=operator container exited with code 255 (Error): onSet.apps/controller-manager -n openshift-controller-manager because it changed\nI0920 00:24:25.431096       1 status_controller.go:165] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-19T23:59:24Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-20T00:24:25Z","message":"Progressing: daemonset/controller-manager: observed generation is 10, desired generation is 11.","reason":"ProgressingDesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-09-20T00:02:57Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-19T23:59:24Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0920 00:24:25.439261       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"da504481-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: daemonset/controller-manager: observed generation is 10, desired generation is 11.")\nI0920 00:24:36.816582       1 observer_polling.go:78] Observed change: file:/var/run/secrets/serving-cert/tls.crt (current: "9502685d493e25f8599eae2bf75950d3acd0e71411570bd9c0b2b68cab7f407e", lastKnown: "1a421b277b5ce96610293068f03a07885741824f8f57243a54cf6a95d9dd9783")\nW0920 00:24:36.816634       1 builder.go:108] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified\nI0920 00:24:36.816786       1 observer_polling.go:78] Observed change: file:/var/run/secrets/serving-cert/tls.key (current: "0be32f9d3634c2363005556fde28d0a8d4d2b8e4c546255f146a7d98f90360dd", lastKnown: "1e1d6161439886cca9e95dfc2acba917af368ea0fb0d7edcfaae4683df8d6012")\nF0920 00:24:36.816832       1 leaderelection.go:66] leaderelection lost\n
Sep 20 00:24:45.261 E ns/openshift-console pod/console-fdb4c7dbb-pfklm node/ip-10-0-128-186.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/09/20 00:07:45 cmd/main: cookies are secure!\n2020/09/20 00:07:45 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/20 00:07:55 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/20 00:08:05 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/20 00:08:15 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-3kz8n3rl-eb227.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-3kz8n3rl-eb227.origin-ci-int-aws.dev.rhcloud.com: dial tcp: lookup oauth-openshift.apps.ci-op-3kz8n3rl-eb227.origin-ci-int-aws.dev.rhcloud.com on 172.30.0.10:53: no such host\n2020/09/20 00:08:25 cmd/main: Binding to 0.0.0.0:8443...\n2020/09/20 00:08:25 cmd/main: using TLS\n2020/09/20 00:24:23 http: TLS handshake error from 10.131.0.18:52454: remote error: tls: error decrypting message\n2020/09/20 00:24:28 http: TLS handshake error from 10.131.0.18:52502: remote error: tls: error decrypting message\n2020/09/20 00:24:33 http: TLS handshake error from 10.131.0.18:52568: remote error: tls: error decrypting message\n2020/09/20 00:24:38 http: TLS handshake error from 10.128.2.16:53538: remote error: tls: error decrypting message\n2020/09/20 00:24:38 http: TLS handshake error from 10.131.0.18:52614: remote error: tls: error decrypting message\n2020/09/20 00:24:43 http: TLS handshake error from 10.128.2.16:53596: remote error: tls: error decrypting message\n2020/09/20 00:24:43 http: TLS handshake error from 10.131.0.18:52676: remote error: tls: error decrypting message\n
Sep 20 00:24:51.375 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7d8fcd75f9-4q6w2 node/ip-10-0-151-235.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): ":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0920 00:24:41.525819       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"da17c032-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: nodes/ip-10-0-128-186.us-west-1.compute.internal pods/openshift-kube-scheduler-ip-10-0-128-186.us-west-1.compute.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0920 00:24:41.707412       1 installer_controller.go:331] "ip-10-0-128-186.us-west-1.compute.internal" is in transition to 7, but has not made progress because static pod is pending\nI0920 00:24:44.508512       1 installer_controller.go:331] "ip-10-0-128-186.us-west-1.compute.internal" is in transition to 7, but has not made progress because static pod is pending\nI0920 00:24:46.308672       1 installer_controller.go:331] "ip-10-0-128-186.us-west-1.compute.internal" is in transition to 7, but has not made progress because static pod is pending\nI0920 00:24:51.135366       1 observer_polling.go:114] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="c8031ee0c3c8d2cbf3452783c8aeb10a496fce401d86beb8c5d202648028eb8a", new="6a1baa48fbe79d4d43734f780d3b4cab5b00c8580bb01fbfd07c4547a7eae20b")\nW0920 00:24:51.135402       1 builder.go:108] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified\nI0920 00:24:51.135472       1 observer_polling.go:114] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="98def76aec58889061015588a1ad33903a2e3716ad7afc248b7a954486b0a868", new="931434c53388b8be6fa98cf8c52e0614866ebc5a517ea752f8edbb68753cde9e")\nF0920 00:24:51.135488       1 leaderelection.go:66] leaderelection lost\n
Sep 20 00:24:56.574 E ns/openshift-controller-manager pod/controller-manager-qbw44 node/ip-10-0-132-71.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Sep 20 00:25:00.600 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-6799d8697f-lc52v node/ip-10-0-132-71.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): e achieved new revision 10"\nI0920 00:24:38.845146       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"da0137a4-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-10-ip-10-0-128-186.us-west-1.compute.internal -n openshift-kube-controller-manager because it was missing\nI0920 00:24:53.462964       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"da0137a4-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: nodes/ip-10-0-128-186.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-128-186.us-west-1.compute.internal container=\"kube-controller-manager-10\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0920 00:25:00.456077       1 observer_polling.go:114] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="7e340a9cf3a4700f604f7edb3822faa39bd575ef25c41f5527e2c044c8878aa0", new="92739d13f2bf4aedbb4d7b6af197a11655fca522874b309c47e98909700efd22")\nW0920 00:25:00.456141       1 builder.go:108] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified\nI0920 00:25:00.456209       1 observer_polling.go:114] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="ab7c13488dbebc052ee9b5240d042f1f50b141e265aac4360829161bc1bc486e", new="0d21e97ed220ea7e0ae964723e67ac1628c0accbd446719223e5b9468b10ec15")\nF0920 00:25:00.456265       1 leaderelection.go:66] leaderelection lost\nI0920 00:25:00.471194       1 installer_controller.go:860] Shutting down InstallerController\n
Sep 20 00:25:05.975 E ns/openshift-console pod/console-fdb4c7dbb-7k8sc node/ip-10-0-132-71.us-west-1.compute.internal container=console container exited with code 2 (Error): ud.com: x509: certificate signed by unknown authority\n2020/09/20 00:07:13 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://172.30.0.1:443/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/20 00:07:23 cmd/main: Binding to 0.0.0.0:8443...\n2020/09/20 00:07:23 cmd/main: using TLS\n2020/09/20 00:24:24 http: TLS handshake error from 10.131.0.18:41018: remote error: tls: error decrypting message\n2020/09/20 00:24:29 http: TLS handshake error from 10.131.0.18:41064: remote error: tls: error decrypting message\n2020/09/20 00:24:34 http: TLS handshake error from 10.131.0.18:41132: remote error: tls: error decrypting message\n2020/09/20 00:24:39 http: TLS handshake error from 10.128.2.16:60184: remote error: tls: error decrypting message\n2020/09/20 00:24:39 http: TLS handshake error from 10.131.0.18:41176: remote error: tls: error decrypting message\n2020/09/20 00:24:44 http: TLS handshake error from 10.128.2.16:60256: remote error: tls: error decrypting message\n2020/09/20 00:24:44 http: TLS handshake error from 10.131.0.18:41240: remote error: tls: error decrypting message\n2020/09/20 00:24:49 http: TLS handshake error from 10.128.2.16:60308: remote error: tls: error decrypting message\n2020/09/20 00:24:49 http: TLS handshake error from 10.131.0.18:41280: remote error: tls: error decrypting message\n2020/09/20 00:24:54 http: TLS handshake error from 10.128.2.16:60386: remote error: tls: error decrypting message\n2020/09/20 00:24:54 http: TLS handshake error from 10.131.0.18:41344: remote error: tls: error decrypting message\n2020/09/20 00:24:59 http: TLS handshake error from 10.128.2.16:60432: remote error: tls: error decrypting message\n2020/09/20 00:24:59 http: TLS handshake error from 10.131.0.18:41384: remote error: tls: error decrypting message\n2020/09/20 00:25:04 http: TLS handshake error from 10.128.2.16:60512: remote error: tls: error decrypting message\n2020/09/20 00:25:04 http: TLS handshake error from 10.131.0.18:41448: remote error: tls: error decrypting message\n
Sep 20 00:25:09.653 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-7b49db56f8-v5c9k node/ip-10-0-132-71.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): go:101: watch of *v1.Ingress ended with: too old resource version: 16049 (21564)\nW0920 00:21:53.404583       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 18237 (22010)\nW0920 00:21:53.404596       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Secret ended with: too old resource version: 17700 (18340)\nW0920 00:21:53.404679       1 reflector.go:289] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 16042 (20691)\nW0920 00:21:53.405535       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Secret ended with: too old resource version: 17700 (18340)\nW0920 00:21:54.128708       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ServiceAccount ended with: too old resource version: 17357 (18342)\nW0920 00:21:54.147649       1 reflector.go:289] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: watch of *unstructured.Unstructured ended with: too old resource version: 18288 (21721)\nW0920 00:21:54.162175       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 18237 (22036)\nI0920 00:25:09.315478       1 observer_polling.go:78] Observed change: file:/var/run/secrets/serving-cert/tls.crt (current: "ee19bc0b0c3e62605c649d7b980cdea5343f4ee59e85c19052fe0cecd11ba620", lastKnown: "3b96f4cb88af08de3e2b067306ab1e8cb36c2d5a80b86084ffef5a45d053bd09")\nW0920 00:25:09.315523       1 builder.go:108] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified\nI0920 00:25:09.315595       1 observer_polling.go:78] Observed change: file:/var/run/secrets/serving-cert/tls.key (current: "afbccea675e994ef2c46b33560b2f05e4cdb0bf6ea59cf73948d8c21b9b87731", lastKnown: "66befdcd3b8ffb478bd6b01d92bbca2a6a7d7d7718c35f0d554e36ca25e17155")\nF0920 00:25:09.315600       1 leaderelection.go:66] leaderelection lost\n
Sep 20 00:25:15.383 E ns/openshift-console-operator pod/console-operator-8577795fd-hkzxq node/ip-10-0-128-186.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): 09-19-194502\nE0920 00:24:44.763197       1 status.go:71] SyncLoopRefreshProgressing InProgress Working toward version 4.2.0-0.ci-2020-09-19-194502\nE0920 00:24:44.763230       1 status.go:71] DeploymentAvailable FailedUpdate 2 replicas ready at version 4.2.0-0.ci-2020-09-19-194502\nI0920 00:25:04.690522       1 status_controller.go:165] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-20T00:24:05Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-20T00:25:04Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-20T00:25:04Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-20T00:03:52Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0920 00:25:04.698503       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"addbbaf4-fad4-11ea-9e3a-0654f12b702f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing changed from True to False (""),Available changed from False to True ("")\nI0920 00:25:14.701615       1 observer_polling.go:78] Observed change: file:/var/run/secrets/serving-cert/tls.crt (current: "724f8ef588e40cb1bea92a9cc5b0fb885abe799dd74f9cde4cf3676ee51ca849", lastKnown: "e3d28a608c5f8eef511ad0c87c2637547e44d0fff6622d00407a3353c7583b64")\nW0920 00:25:14.701654       1 builder.go:108] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified\nI0920 00:25:14.701727       1 observer_polling.go:78] Observed change: file:/var/run/secrets/serving-cert/tls.key (current: "b4b6f7a593daecf9b3a5ab27c1a918c7277343b3fc1b4dd9855cc449c1eb9916", lastKnown: "dab98dff5d4798ba6b034ce4df118f710d70a11d1a99c08d055fe24f6bd2cc16")\nF0920 00:25:14.701740       1 leaderelection.go:66] leaderelection lost\nF0920 00:25:14.701756       1 builder.go:217] server exited\n
Sep 20 00:25:23.408 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-c5b95d755-tkzf8 node/ip-10-0-128-186.us-west-1.compute.internal container=operator container exited with code 255 (Error): og-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0920 00:24:36.573430       1 request.go:1145] body was not decodable (unable to check for Status): Object 'Kind' is missing in '{\n  "paths": [\n    "/apis",\n    "/metrics",\n    "/version"\n  ]\n}'\nI0920 00:24:36.573460       1 workload_controller.go:325] No service bindings found, nothing to delete.\nI0920 00:24:36.580627       1 workload_controller.go:179] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0920 00:24:44.164715       1 leaderelection.go:258] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0920 00:24:54.175202       1 leaderelection.go:258] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0920 00:24:56.571521       1 request.go:1145] body was not decodable (unable to check for Status): Object 'Kind' is missing in '{\n  "paths": [\n    "/apis",\n    "/metrics",\n    "/version"\n  ]\n}'\nI0920 00:24:56.571549       1 workload_controller.go:325] No service bindings found, nothing to delete.\nI0920 00:24:56.579270       1 workload_controller.go:179] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0920 00:25:04.184899       1 leaderelection.go:258] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0920 00:25:14.193943       1 leaderelection.go:258] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0920 00:25:23.123395       1 observer_polling.go:78] Observed change: file:/var/run/secrets/serving-cert/tls.crt (current: "c49afe87134771692e30146305fa7a01a07e20ae9a08dffe053f503c241fe053", lastKnown: "723d233d714e6c93c675b0d0ede22975b1ea5ccd97196ca790dbd3e1066d80eb")\nW0920 00:25:23.123445       1 builder.go:108] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified\nF0920 00:25:23.123518       1 leaderelection.go:66] leaderelection lost\n
Sep 20 00:25:40.284 E ns/openshift-monitoring pod/prometheus-adapter-574f6dffc7-bcdcx node/ip-10-0-141-28.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0920 00:21:51.036303       1 adapter.go:93] successfully using in-cluster auth\nI0920 00:21:51.380299       1 secure_serving.go:116] Serving securely on [::]:6443\n
Sep 20 00:25:43.492 E ns/openshift-controller-manager pod/controller-manager-9rdlf node/ip-10-0-128-186.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Sep 20 00:25:50.896 E ns/openshift-monitoring pod/prometheus-adapter-574f6dffc7-dslxv node/ip-10-0-148-225.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0920 00:21:33.861982       1 adapter.go:93] successfully using in-cluster auth\nI0920 00:21:34.394672       1 secure_serving.go:116] Serving securely on [::]:6443\n
Sep 20 00:26:19.422 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-225.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): =info ts=2020-09-20T00:26:18.165Z caller=main.go:332 fd_limits="(soft=1048576, hard=1048576)"\nlevel=info ts=2020-09-20T00:26:18.165Z caller=main.go:333 vm_limits="(soft=unlimited, hard=unlimited)"\nlevel=info ts=2020-09-20T00:26:18.169Z caller=main.go:652 msg="Starting TSDB ..."\nlevel=info ts=2020-09-20T00:26:18.169Z caller=web.go:448 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-20T00:26:18.174Z caller=main.go:667 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-20T00:26:18.174Z caller=main.go:668 msg="TSDB started"\nlevel=info ts=2020-09-20T00:26:18.174Z caller=main.go:738 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-20T00:26:18.175Z caller=main.go:521 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-20T00:26:18.175Z caller=main.go:535 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-20T00:26:18.175Z caller=main.go:557 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-20T00:26:18.175Z caller=main.go:531 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-20T00:26:18.175Z caller=manager.go:776 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-20T00:26:18.175Z caller=main.go:517 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-20T00:26:18.175Z caller=manager.go:782 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-20T00:26:18.175Z caller=main.go:551 msg="Scrape manager stopped"\nlevel=info ts=2020-09-20T00:26:18.175Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-20T00:26:18.175Z caller=main.go:722 msg="Notifier manager stopped"\nlevel=error ts=2020-09-20T00:26:18.176Z caller=main.go:731 err="error loading config from \"/etc/prometheus/config_out/prometheus.env.yaml\": couldn't load configuration (--config.file=\"/etc/prometheus/config_out/prometheus.env.yaml\"): open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 20 00:26:22.298 E ns/openshift-sdn pod/sdn-dqkk4 node/ip-10-0-141-28.us-west-1.compute.internal container=sdn container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:26:40.498 E ns/openshift-sdn pod/sdn-controller-q6nvm node/ip-10-0-128-186.us-west-1.compute.internal container=sdn-controller container exited with code 137 (Error): I0919 23:58:40.874390       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Sep 20 00:26:42.332 E ns/openshift-sdn pod/ovs-ft9hr node/ip-10-0-132-71.us-west-1.compute.internal container=openvswitch container exited with code 255 (Error): 2Z|00051|connmgr|INFO|br0<->unix#54: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-09-19T23:59:36.887Z|00052|bridge|INFO|bridge br0: added interface vethd8e1e67c on port 6\n2020-09-19T23:59:36.918Z|00053|connmgr|INFO|br0<->unix#57: 5 flow_mods in the last 0 s (5 adds)\n2020-09-19T23:59:36.957Z|00054|connmgr|INFO|br0<->unix#60: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T23:59:53.107Z|00055|connmgr|INFO|br0<->unix#66: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T23:59:53.136Z|00056|connmgr|INFO|br0<->unix#69: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T23:59:53.160Z|00057|bridge|INFO|bridge br0: deleted interface veth886776da on port 3\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-09-19T23:59:53.148Z|00017|jsonrpc|WARN|unix#46: receive error: Connection reset by peer\n2020-09-19T23:59:53.148Z|00018|reconnect|WARN|unix#46: connection dropped (Connection reset by peer)\n2020-09-19T23:59:53.153Z|00019|jsonrpc|WARN|unix#47: receive error: Connection reset by peer\n2020-09-19T23:59:53.153Z|00020|reconnect|WARN|unix#47: connection dropped (Connection reset by peer)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2020-09-19T23:59:53.826Z|00058|connmgr|INFO|br0<->unix#72: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T23:59:53.868Z|00059|connmgr|INFO|br0<->unix#75: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T23:59:53.892Z|00060|bridge|INFO|bridge br0: deleted interface vethd8e1e67c on port 6\n2020-09-19T23:59:53.930Z|00061|connmgr|INFO|br0<->unix#78: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T23:59:53.971Z|00062|connmgr|INFO|br0<->unix#81: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T23:59:53.998Z|00063|bridge|INFO|bridge br0: deleted interface veth528bfa6d on port 5\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2020-09-19T23:59:53.991Z|00021|jsonrpc|WARN|unix#57: receive error: Connection reset by peer\n2020-09-19T23:59:53.991Z|00022|reconnect|WARN|unix#57: connection dropped (Connection reset by peer)\nTerminated\novs-vswitchd is not running.\novsdb-server is not running.\n
Sep 20 00:26:45.913 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-51.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): =info ts=2020-09-20T00:26:43.212Z caller=main.go:332 fd_limits="(soft=1048576, hard=1048576)"\nlevel=info ts=2020-09-20T00:26:43.212Z caller=main.go:333 vm_limits="(soft=unlimited, hard=unlimited)"\nlevel=info ts=2020-09-20T00:26:43.214Z caller=main.go:652 msg="Starting TSDB ..."\nlevel=info ts=2020-09-20T00:26:43.214Z caller=web.go:448 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-20T00:26:43.227Z caller=main.go:667 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-20T00:26:43.227Z caller=main.go:668 msg="TSDB started"\nlevel=info ts=2020-09-20T00:26:43.227Z caller=main.go:738 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-20T00:26:43.227Z caller=main.go:521 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-20T00:26:43.227Z caller=main.go:535 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-20T00:26:43.227Z caller=main.go:557 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-20T00:26:43.227Z caller=main.go:531 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-20T00:26:43.227Z caller=main.go:517 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-20T00:26:43.227Z caller=main.go:551 msg="Scrape manager stopped"\nlevel=info ts=2020-09-20T00:26:43.227Z caller=manager.go:776 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-20T00:26:43.227Z caller=manager.go:782 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-20T00:26:43.227Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-20T00:26:43.227Z caller=main.go:722 msg="Notifier manager stopped"\nlevel=error ts=2020-09-20T00:26:43.227Z caller=main.go:731 err="error loading config from \"/etc/prometheus/config_out/prometheus.env.yaml\": couldn't load configuration (--config.file=\"/etc/prometheus/config_out/prometheus.env.yaml\"): open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 20 00:27:15.656 E ns/openshift-multus pod/multus-tvc52 node/ip-10-0-132-71.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 20 00:27:16.892 E ns/openshift-sdn pod/sdn-5sg2x node/ip-10-0-128-186.us-west-1.compute.internal container=sdn container exited with code 255 (Error): I0920 00:27:16.741295   76439 cmd.go:253] Overriding kubernetes api to https://api-int.ci-op-3kz8n3rl-eb227.origin-ci-int-aws.dev.rhcloud.com:6443\nI0920 00:27:16.741397   76439 cmd.go:142] Reading node configuration from /config/sdn-config.yaml\nF0920 00:27:16.741416   76439 cmd.go:102] open /config/sdn-config.yaml: no such file or directory\n
Sep 20 00:27:18.247 E ns/openshift-sdn pod/sdn-gx6bn node/ip-10-0-137-51.us-west-1.compute.internal container=sdn container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:27:19.407 E ns/openshift-authentication-operator pod/authentication-operator-b5d58cb74-b7262 node/ip-10-0-151-235.us-west-1.compute.internal container=operator container exited with code 255 (Error): ed","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-20T00:03:09Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0920 00:26:10.252054       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"e239873c-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout" to ""\nI0920 00:26:33.393397       1 status_controller.go:165] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-20T00:09:55Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-20T00:26:33Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-20T00:09:55Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-20T00:03:09Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0920 00:26:33.401438       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"e239873c-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from True to False ("")\nI0920 00:27:18.500051       1 observer_polling.go:88] Observed change: file:/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt (current: "4d4d883ed057acecc7a9a45f5a2a367628b29c1e0a9a68bb57b2591637027813", lastKnown: "")\nI0920 00:27:18.500150       1 cmd.go:111] exiting because "/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt" changed\nF0920 00:27:18.500195       1 leaderelection.go:66] leaderelection lost\nF0920 00:27:18.511264       1 builder.go:217] server exited\n
Sep 20 00:27:20.405 E ns/openshift-sdn pod/sdn-controller-9tvbq node/ip-10-0-151-235.us-west-1.compute.internal container=sdn-controller container exited with code 137 (Error): urce\nE0920 00:21:13.247139       1 memcache.go:141] couldn't get resource list for scheduling.k8s.io/v1: the server could not find the requested resource\nE0920 00:21:13.248839       1 memcache.go:141] couldn't get resource list for coordination.k8s.io/v1: the server could not find the requested resource\nE0920 00:21:13.250512       1 memcache.go:141] couldn't get resource list for node.k8s.io/v1beta1: the server could not find the requested resource\nE0920 00:21:43.392683       1 memcache.go:141] couldn't get resource list for networking.k8s.io/v1beta1: the server could not find the requested resource\nE0920 00:21:43.399049       1 memcache.go:141] couldn't get resource list for scheduling.k8s.io/v1: the server could not find the requested resource\nE0920 00:21:43.400621       1 memcache.go:141] couldn't get resource list for coordination.k8s.io/v1: the server could not find the requested resource\nE0920 00:21:43.402236       1 memcache.go:141] couldn't get resource list for node.k8s.io/v1beta1: the server could not find the requested resource\nE0920 00:21:43.454594       1 memcache.go:141] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nW0920 00:21:53.172361       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 13881 (22538)\nW0920 00:21:53.465311       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 19710 (22538)\nW0920 00:26:04.820897       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 22538 (26453)\nW0920 00:26:05.225976       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 22538 (26461)\n
Sep 20 00:27:29.723 E ns/openshift-service-ca pod/apiservice-cabundle-injector-6869cf68d-6l4l7 node/ip-10-0-132-71.us-west-1.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Sep 20 00:27:41.809 E ns/openshift-monitoring pod/telemeter-client-56bf9657fc-xm8pr node/ip-10-0-137-51.us-west-1.compute.internal container=reload container exited with code 2 (Error): 
Sep 20 00:27:41.809 E ns/openshift-monitoring pod/telemeter-client-56bf9657fc-xm8pr node/ip-10-0-137-51.us-west-1.compute.internal container=telemeter-client container exited with code 2 (Error): 
Sep 20 00:27:48.644 E ns/openshift-sdn pod/ovs-mk9xr node/ip-10-0-151-235.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): nmgr|INFO|br0<->unix#985: 2 flow_mods in the last 0 s (2 adds)\n2020-09-20T00:27:45.916Z|00414|connmgr|INFO|br0<->unix#989: 1 flow_mods in the last 0 s (1 adds)\n2020-09-20T00:27:45.976Z|00415|connmgr|INFO|br0<->unix#997: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-20T00:27:46.003Z|00416|connmgr|INFO|br0<->unix#1000: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-20T00:27:46.029Z|00417|connmgr|INFO|br0<->unix#1003: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-20T00:27:46.058Z|00418|connmgr|INFO|br0<->unix#1006: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-20T00:27:46.084Z|00419|connmgr|INFO|br0<->unix#1009: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-20T00:27:46.112Z|00420|connmgr|INFO|br0<->unix#1012: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-20T00:27:46.137Z|00421|connmgr|INFO|br0<->unix#1015: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-20T00:27:46.160Z|00422|connmgr|INFO|br0<->unix#1018: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-20T00:27:46.183Z|00423|connmgr|INFO|br0<->unix#1021: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-20T00:27:46.313Z|00424|connmgr|INFO|br0<->unix#1024: 3 flow_mods in the last 0 s (3 adds)\n2020-09-20T00:27:46.335Z|00425|connmgr|INFO|br0<->unix#1027: 1 flow_mods in the last 0 s (1 adds)\n2020-09-20T00:27:46.361Z|00426|connmgr|INFO|br0<->unix#1030: 3 flow_mods in the last 0 s (3 adds)\n2020-09-20T00:27:46.390Z|00427|connmgr|INFO|br0<->unix#1033: 1 flow_mods in the last 0 s (1 adds)\n2020-09-20T00:27:46.421Z|00428|connmgr|INFO|br0<->unix#1036: 3 flow_mods in the last 0 s (3 adds)\n2020-09-20T00:27:46.451Z|00429|connmgr|INFO|br0<->unix#1039: 1 flow_mods in the last 0 s (1 adds)\n2020-09-20T00:27:46.481Z|00430|connmgr|INFO|br0<->unix#1042: 3 flow_mods in the last 0 s (3 adds)\n2020-09-20T00:27:46.515Z|00431|connmgr|INFO|br0<->unix#1045: 1 flow_mods in the last 0 s (1 adds)\n2020-09-20T00:27:46.543Z|00432|connmgr|INFO|br0<->unix#1048: 3 flow_mods in the last 0 s (3 adds)\n2020-09-20T00:27:46.573Z|00433|connmgr|INFO|br0<->unix#1051: 1 flow_mods in the last 0 s (1 adds)\n
Sep 20 00:27:55.382 E ns/openshift-sdn pod/sdn-htkqr node/ip-10-0-151-235.us-west-1.compute.internal container=sdn container exited with code 255 (Error): "openshift-monitoring/prometheus-k8s:tenancy" at 172.30.189.145:9092/TCP\nI0920 00:27:46.372798   79132 service.go:332] Adding new service port "openshift-cluster-version/cluster-version-operator:metrics" at 172.30.173.131:9099/TCP\nI0920 00:27:46.372816   79132 service.go:332] Adding new service port "openshift-controller-manager-operator/metrics:https" at 172.30.222.216:443/TCP\nI0920 00:27:46.372831   79132 service.go:332] Adding new service port "openshift-kube-apiserver/apiserver:https" at 172.30.38.38:443/TCP\nI0920 00:27:46.372846   79132 service.go:332] Adding new service port "openshift-authentication-operator/metrics:https" at 172.30.125.134:443/TCP\nI0920 00:27:46.373133   79132 proxier.go:675] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0920 00:27:46.512253   79132 proxier.go:367] userspace proxy: processing 0 service events\nI0920 00:27:46.512390   79132 proxier.go:346] userspace syncProxyRules took 140.237483ms\nI0920 00:27:46.605026   79132 proxier.go:1474] Opened local port "nodePort for openshift-ingress/router-default:http" (:32608/tcp)\nI0920 00:27:46.605125   79132 proxier.go:1474] Opened local port "nodePort for openshift-ingress/router-default:https" (:31245/tcp)\nI0920 00:27:46.605317   79132 proxier.go:1474] Opened local port "nodePort for e2e-k8s-service-upgrade-3743/service-test:" (:31094/tcp)\nI0920 00:27:46.650461   79132 healthcheck.go:151] Opening healthcheck "openshift-ingress/router-default" on port 31757\nI0920 00:27:46.659610   79132 proxy.go:303] openshift-sdn proxy services and endpoints initialized\nI0920 00:27:46.659646   79132 cmd.go:173] openshift-sdn network plugin registering startup\nI0920 00:27:46.659773   79132 cmd.go:177] openshift-sdn network plugin ready\nI0920 00:27:53.261240   79132 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0920 00:27:53.261393   79132 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 20 00:27:57.184 E ns/openshift-multus pod/multus-8jbtj node/ip-10-0-151-235.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 20 00:27:59.750 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-148-225.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 12: http: TLS handshake error from 10.128.2.16:59062: remote error: tls: error decrypting message\n2020/09/20 00:25:01 server.go:3012: http: TLS handshake error from 10.131.0.18:48516: remote error: tls: error decrypting message\n2020/09/20 00:25:06 server.go:3012: http: TLS handshake error from 10.128.2.16:59136: remote error: tls: error decrypting message\n2020/09/20 00:25:06 server.go:3012: http: TLS handshake error from 10.131.0.18:48574: remote error: tls: error decrypting message\n2020/09/20 00:25:11 server.go:3012: http: TLS handshake error from 10.128.2.16:59190: remote error: tls: error decrypting message\n2020/09/20 00:25:11 server.go:3012: http: TLS handshake error from 10.131.0.18:48620: remote error: tls: error decrypting message\n2020/09/20 00:25:16 server.go:3012: http: TLS handshake error from 10.128.2.16:59262: remote error: tls: error decrypting message\n2020/09/20 00:25:16 server.go:3012: http: TLS handshake error from 10.131.0.18:48678: remote error: tls: error decrypting message\n2020/09/20 00:25:21 server.go:3012: http: TLS handshake error from 10.128.2.16:59318: remote error: tls: error decrypting message\n2020/09/20 00:25:21 server.go:3012: http: TLS handshake error from 10.131.0.18:48722: remote error: tls: error decrypting message\n2020/09/20 00:25:26 server.go:3012: http: TLS handshake error from 10.131.0.18:48782: remote error: tls: error decrypting message\n2020/09/20 00:25:26 server.go:3012: http: TLS handshake error from 10.128.2.16:59392: remote error: tls: error decrypting message\n2020/09/20 00:25:31 server.go:3012: http: TLS handshake error from 10.128.2.16:59432: remote error: tls: error decrypting message\n2020/09/20 00:25:31 server.go:3012: http: TLS handshake error from 10.131.0.18:48826: remote error: tls: error decrypting message\n2020/09/20 00:25:36 server.go:3012: http: TLS handshake error from 10.128.2.16:59500: remote error: tls: error decrypting message\n2020/09/20 00:25:36 server.go:3012: http: TLS handshake error from 10.131.0.18:48888: remote error: tls: error decrypting message\n
Sep 20 00:27:59.750 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-148-225.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/09/20 00:22:32 Watching directory: "/etc/alertmanager/config"\n
Sep 20 00:28:10.579 E ns/openshift-console pod/console-78c6f486bb-2x7v7 node/ip-10-0-151-235.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/09/20 00:24:33 cmd/main: cookies are secure!\n2020/09/20 00:24:33 cmd/main: Binding to 0.0.0.0:8443...\n2020/09/20 00:24:33 cmd/main: using TLS\n
Sep 20 00:28:14.635 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-141-28.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 12: http: TLS handshake error from 10.128.2.16:39238: remote error: tls: error decrypting message\n2020/09/20 00:25:01 server.go:3012: http: TLS handshake error from 10.131.0.18:50940: remote error: tls: error decrypting message\n2020/09/20 00:25:06 server.go:3012: http: TLS handshake error from 10.128.2.16:39316: remote error: tls: error decrypting message\n2020/09/20 00:25:06 server.go:3012: http: TLS handshake error from 10.131.0.18:50996: remote error: tls: error decrypting message\n2020/09/20 00:25:11 server.go:3012: http: TLS handshake error from 10.128.2.16:39368: remote error: tls: error decrypting message\n2020/09/20 00:25:11 server.go:3012: http: TLS handshake error from 10.131.0.18:51044: remote error: tls: error decrypting message\n2020/09/20 00:25:16 server.go:3012: http: TLS handshake error from 10.128.2.16:39442: remote error: tls: error decrypting message\n2020/09/20 00:25:16 server.go:3012: http: TLS handshake error from 10.131.0.18:51100: remote error: tls: error decrypting message\n2020/09/20 00:25:21 server.go:3012: http: TLS handshake error from 10.128.2.16:39494: remote error: tls: error decrypting message\n2020/09/20 00:25:21 server.go:3012: http: TLS handshake error from 10.131.0.18:51146: remote error: tls: error decrypting message\n2020/09/20 00:25:27 server.go:3012: http: TLS handshake error from 10.131.0.18:51206: remote error: tls: error decrypting message\n2020/09/20 00:25:27 server.go:3012: http: TLS handshake error from 10.128.2.16:39568: remote error: tls: error decrypting message\n2020/09/20 00:25:32 server.go:3012: http: TLS handshake error from 10.128.2.16:39608: remote error: tls: error decrypting message\n2020/09/20 00:25:32 server.go:3012: http: TLS handshake error from 10.131.0.18:51252: remote error: tls: error decrypting message\n2020/09/20 00:25:37 server.go:3012: http: TLS handshake error from 10.128.2.16:39676: remote error: tls: error decrypting message\n2020/09/20 00:25:37 server.go:3012: http: TLS handshake error from 10.131.0.18:51312: remote error: tls: error decrypting message\n
Sep 20 00:28:14.635 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-141-28.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/09/20 00:22:42 Watching directory: "/etc/alertmanager/config"\n
Sep 20 00:28:37.285 E ns/openshift-multus pod/multus-dgn7k node/ip-10-0-128-186.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 20 00:28:43.613 E ns/openshift-sdn pod/ovs-5qgt8 node/ip-10-0-128-186.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): th2aa80755 (No such device)\n2020-09-20T00:28:17.441Z|00561|bridge|WARN|could not open network device veth2aa80755 (No such device)\n2020-09-20T00:28:17.450Z|00562|bridge|INFO|bridge br0: added interface vethda35d51a on port 85\n2020-09-20T00:28:17.453Z|00563|bridge|WARN|could not open network device veth2aa80755 (No such device)\n2020-09-20T00:28:17.461Z|00564|bridge|WARN|could not open network device veth2aa80755 (No such device)\n2020-09-20T00:28:17.488Z|00565|connmgr|INFO|br0<->unix#1268: 5 flow_mods in the last 0 s (5 adds)\n2020-09-20T00:28:17.547Z|00566|connmgr|INFO|br0<->unix#1271: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-20T00:28:26.400Z|00567|connmgr|INFO|br0<->unix#1274: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-20T00:28:26.451Z|00568|connmgr|INFO|br0<->unix#1277: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-20T00:28:26.494Z|00569|bridge|INFO|bridge br0: deleted interface veth7a91b538 on port 60\n2020-09-20T00:28:26.508Z|00570|bridge|WARN|could not open network device veth2aa80755 (No such device)\n2020-09-20T00:28:26.514Z|00571|bridge|WARN|could not open network device veth2aa80755 (No such device)\n2020-09-20T00:28:26.587Z|00572|bridge|WARN|could not open network device veth2aa80755 (No such device)\n2020-09-20T00:28:26.596Z|00573|bridge|WARN|could not open network device veth2aa80755 (No such device)\n2020-09-20T00:28:38.866Z|00574|bridge|WARN|could not open network device veth2aa80755 (No such device)\n2020-09-20T00:28:38.881Z|00575|bridge|WARN|could not open network device veth2aa80755 (No such device)\n2020-09-20T00:28:38.900Z|00576|bridge|INFO|bridge br0: added interface vethef8db693 on port 86\n2020-09-20T00:28:38.903Z|00577|bridge|WARN|could not open network device veth2aa80755 (No such device)\n2020-09-20T00:28:38.911Z|00578|bridge|WARN|could not open network device veth2aa80755 (No such device)\n2020-09-20T00:28:38.935Z|00579|connmgr|INFO|br0<->unix#1283: 5 flow_mods in the last 0 s (5 adds)\n2020-09-20T00:28:38.975Z|00580|connmgr|INFO|br0<->unix#1286: 2 flow_mods in the last 0 s (2 deletes)\n
Sep 20 00:28:47.613 E ns/openshift-sdn pod/sdn-hkgwx node/ip-10-0-128-186.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 10.129.2.28:9093 10.131.0.29:9093]\nI0920 00:28:32.654926   77821 roundrobin.go:240] Delete endpoint 10.131.0.29:9093 for service "openshift-monitoring/alertmanager-operated:web"\nI0920 00:28:32.655010   77821 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-monitoring/alertmanager-operated:mesh-udp to [10.128.2.22:9094 10.129.2.28:9094 10.131.0.29:9094]\nI0920 00:28:32.655062   77821 roundrobin.go:240] Delete endpoint 10.131.0.29:9094 for service "openshift-monitoring/alertmanager-operated:mesh-udp"\nI0920 00:28:32.655255   77821 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-monitoring/alertmanager-main:web to [10.128.2.22:9095 10.129.2.28:9095 10.131.0.29:9095]\nI0920 00:28:32.655349   77821 roundrobin.go:240] Delete endpoint 10.131.0.29:9095 for service "openshift-monitoring/alertmanager-main:web"\nI0920 00:28:32.655447   77821 proxy.go:331] hybrid proxy: syncProxyRules start\nI0920 00:28:32.849333   77821 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0920 00:28:32.942974   77821 proxier.go:367] userspace proxy: processing 0 service events\nI0920 00:28:32.943003   77821 proxier.go:346] userspace syncProxyRules took 93.634057ms\nI0920 00:28:32.943017   77821 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0920 00:28:32.943032   77821 proxy.go:331] hybrid proxy: syncProxyRules start\nI0920 00:28:33.148057   77821 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0920 00:28:33.224717   77821 proxier.go:367] userspace proxy: processing 0 service events\nI0920 00:28:33.224742   77821 proxier.go:346] userspace syncProxyRules took 76.660689ms\nI0920 00:28:33.224753   77821 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0920 00:28:46.995140   77821 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0920 00:28:46.995196   77821 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 20 00:29:19.752 E ns/openshift-multus pod/multus-lmtsn node/ip-10-0-141-28.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 20 00:29:33.946 E ns/openshift-sdn pod/ovs-7hgwj node/ip-10-0-148-225.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): \n2020-09-20T00:27:59.374Z|00183|connmgr|INFO|br0<->unix#498: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-20T00:27:59.399Z|00184|bridge|INFO|bridge br0: deleted interface vethd664c8d8 on port 21\n2020-09-20T00:27:59.498Z|00185|connmgr|INFO|br0<->unix#501: 3 flow_mods in the last 0 s (3 adds)\n2020-09-20T00:27:59.522Z|00186|connmgr|INFO|br0<->unix#504: 1 flow_mods in the last 0 s (1 adds)\n2020-09-20T00:27:59.554Z|00187|connmgr|INFO|br0<->unix#507: 3 flow_mods in the last 0 s (3 adds)\n2020-09-20T00:27:59.580Z|00188|connmgr|INFO|br0<->unix#510: 1 flow_mods in the last 0 s (1 adds)\n2020-09-20T00:27:59.609Z|00189|connmgr|INFO|br0<->unix#513: 3 flow_mods in the last 0 s (3 adds)\n2020-09-20T00:27:59.638Z|00190|connmgr|INFO|br0<->unix#516: 1 flow_mods in the last 0 s (1 adds)\n2020-09-20T00:27:59.661Z|00191|connmgr|INFO|br0<->unix#519: 3 flow_mods in the last 0 s (3 adds)\n2020-09-20T00:27:59.692Z|00192|connmgr|INFO|br0<->unix#522: 1 flow_mods in the last 0 s (1 adds)\n2020-09-20T00:27:59.727Z|00193|connmgr|INFO|br0<->unix#525: 3 flow_mods in the last 0 s (3 adds)\n2020-09-20T00:27:59.763Z|00194|connmgr|INFO|br0<->unix#528: 1 flow_mods in the last 0 s (1 adds)\n2020-09-20T00:28:10.597Z|00195|bridge|INFO|bridge br0: added interface veth4a6586ea on port 29\n2020-09-20T00:28:10.628Z|00196|connmgr|INFO|br0<->unix#531: 5 flow_mods in the last 0 s (5 adds)\n2020-09-20T00:28:10.665Z|00197|connmgr|INFO|br0<->unix#534: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-20T00:29:01.528Z|00198|connmgr|INFO|br0<->unix#543: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-20T00:29:01.568Z|00199|connmgr|INFO|br0<->unix#546: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-20T00:29:01.594Z|00200|bridge|INFO|bridge br0: deleted interface veth05b222f7 on port 3\n2020-09-20T00:29:10.938Z|00201|bridge|INFO|bridge br0: added interface veth854c3f71 on port 30\n2020-09-20T00:29:10.968Z|00202|connmgr|INFO|br0<->unix#549: 5 flow_mods in the last 0 s (5 adds)\n2020-09-20T00:29:11.004Z|00203|connmgr|INFO|br0<->unix#552: 2 flow_mods in the last 0 s (2 deletes)\n
Sep 20 00:29:35.954 E ns/openshift-sdn pod/sdn-cnfkk node/ip-10-0-148-225.us-west-1.compute.internal container=sdn container exited with code 255 (Error): rspace proxy: processing 0 service events\nI0920 00:29:10.544916   53485 proxier.go:346] userspace syncProxyRules took 62.260633ms\nI0920 00:29:10.544928   53485 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0920 00:29:22.889435   53485 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-dns/dns-default:dns to [10.128.0.65:5353 10.128.2.20:5353 10.129.0.84:5353 10.129.2.29:5353 10.130.0.70:5353 10.131.0.28:5353]\nI0920 00:29:22.889473   53485 roundrobin.go:240] Delete endpoint 10.129.2.29:5353 for service "openshift-dns/dns-default:dns"\nI0920 00:29:22.889494   53485 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-dns/dns-default:dns-tcp to [10.128.0.65:5353 10.128.2.20:5353 10.129.0.84:5353 10.129.2.29:5353 10.130.0.70:5353 10.131.0.28:5353]\nI0920 00:29:22.889512   53485 roundrobin.go:240] Delete endpoint 10.129.2.29:5353 for service "openshift-dns/dns-default:dns-tcp"\nI0920 00:29:22.889526   53485 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-dns/dns-default:metrics to [10.128.0.65:9153 10.128.2.20:9153 10.129.0.84:9153 10.129.2.29:9153 10.130.0.70:9153 10.131.0.28:9153]\nI0920 00:29:22.889537   53485 roundrobin.go:240] Delete endpoint 10.129.2.29:9153 for service "openshift-dns/dns-default:metrics"\nI0920 00:29:22.889664   53485 proxy.go:331] hybrid proxy: syncProxyRules start\nI0920 00:29:23.037651   53485 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0920 00:29:23.099719   53485 proxier.go:367] userspace proxy: processing 0 service events\nI0920 00:29:23.099740   53485 proxier.go:346] userspace syncProxyRules took 62.065558ms\nI0920 00:29:23.099753   53485 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0920 00:29:35.816170   53485 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0920 00:29:35.816223   53485 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 20 00:30:28.289 E ns/openshift-sdn pod/sdn-qrbmv node/ip-10-0-137-51.us-west-1.compute.internal container=sdn container exited with code 255 (Error): rvice events\nI0920 00:30:09.421147   50774 proxier.go:346] userspace syncProxyRules took 67.574162ms\nI0920 00:30:09.421155   50774 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0920 00:30:26.247852   50774 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-authentication/oauth-openshift:https to [10.128.0.64:6443 10.128.0.73:6443 10.130.0.71:6443]\nI0920 00:30:26.247892   50774 roundrobin.go:240] Delete endpoint 10.128.0.73:6443 for service "openshift-authentication/oauth-openshift:https"\nI0920 00:30:26.247952   50774 proxy.go:331] hybrid proxy: syncProxyRules start\nI0920 00:30:26.291100   50774 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-authentication/oauth-openshift:https to [10.128.0.73:6443 10.130.0.71:6443]\nI0920 00:30:26.291139   50774 roundrobin.go:240] Delete endpoint 10.128.0.64:6443 for service "openshift-authentication/oauth-openshift:https"\nI0920 00:30:26.406576   50774 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0920 00:30:26.469483   50774 proxier.go:367] userspace proxy: processing 0 service events\nI0920 00:30:26.469501   50774 proxier.go:346] userspace syncProxyRules took 62.904058ms\nI0920 00:30:26.469509   50774 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0920 00:30:26.469520   50774 proxy.go:331] hybrid proxy: syncProxyRules start\nI0920 00:30:26.618601   50774 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0920 00:30:26.681342   50774 proxier.go:367] userspace proxy: processing 0 service events\nI0920 00:30:26.681361   50774 proxier.go:346] userspace syncProxyRules took 62.739806ms\nI0920 00:30:26.681368   50774 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0920 00:30:28.148881   50774 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0920 00:30:28.148922   50774 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 20 00:31:16.975 E ns/openshift-sdn pod/ovs-w8mcb node/ip-10-0-141-28.us-west-1.compute.internal container=openvswitch container exited with code 137 (Error): 0T00:26:47.862Z|00174|connmgr|INFO|br0<->unix#476: 1 flow_mods in the last 0 s (1 adds)\n2020-09-20T00:26:47.894Z|00175|connmgr|INFO|br0<->unix#479: 3 flow_mods in the last 0 s (3 adds)\n2020-09-20T00:26:47.923Z|00176|connmgr|INFO|br0<->unix#482: 1 flow_mods in the last 0 s (1 adds)\n2020-09-20T00:26:58.347Z|00177|bridge|INFO|bridge br0: added interface vethd8056710 on port 28\n2020-09-20T00:26:58.387Z|00178|connmgr|INFO|br0<->unix#485: 5 flow_mods in the last 0 s (5 adds)\n2020-09-20T00:26:58.439Z|00179|connmgr|INFO|br0<->unix#488: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-20T00:27:01.648Z|00180|connmgr|INFO|br0<->unix#491: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-20T00:27:01.680Z|00181|connmgr|INFO|br0<->unix#494: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-20T00:27:01.712Z|00182|bridge|INFO|bridge br0: deleted interface veth9ca51073 on port 3\n2020-09-20T00:27:18.450Z|00183|connmgr|INFO|br0<->unix#500: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-20T00:27:18.491Z|00184|connmgr|INFO|br0<->unix#503: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-20T00:27:18.519Z|00185|bridge|INFO|bridge br0: deleted interface veth61bbe045 on port 17\n2020-09-20T00:27:24.716Z|00186|bridge|INFO|bridge br0: added interface veth7caaaf3a on port 29\n2020-09-20T00:27:24.745Z|00187|connmgr|INFO|br0<->unix#506: 5 flow_mods in the last 0 s (5 adds)\n2020-09-20T00:27:24.780Z|00188|connmgr|INFO|br0<->unix#509: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-20T00:28:14.179Z|00189|connmgr|INFO|br0<->unix#515: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-20T00:28:14.220Z|00190|connmgr|INFO|br0<->unix#518: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-20T00:28:14.253Z|00191|bridge|INFO|bridge br0: deleted interface vethf6a749be on port 21\n2020-09-20T00:28:27.663Z|00192|bridge|INFO|bridge br0: added interface vethbe990590 on port 30\n2020-09-20T00:28:27.694Z|00193|connmgr|INFO|br0<->unix#524: 5 flow_mods in the last 0 s (5 adds)\n2020-09-20T00:28:27.734Z|00194|connmgr|INFO|br0<->unix#527: 2 flow_mods in the last 0 s (2 deletes)\n
Sep 20 00:31:18.991 E ns/openshift-sdn pod/sdn-zsd78 node/ip-10-0-141-28.us-west-1.compute.internal container=sdn container exited with code 255 (Error): y.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0920 00:30:28.484842   49386 proxier.go:367] userspace proxy: processing 0 service events\nI0920 00:30:28.484862   49386 proxier.go:346] userspace syncProxyRules took 62.539346ms\nI0920 00:30:28.484872   49386 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0920 00:30:35.077917   49386 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.128.186:9101 10.0.132.71:9101 10.0.137.51:9101 10.0.141.28:9101 10.0.148.225:9101 10.0.151.235:9101]\nI0920 00:30:35.077949   49386 roundrobin.go:240] Delete endpoint 10.0.137.51:9101 for service "openshift-sdn/sdn:metrics"\nI0920 00:30:35.078014   49386 proxy.go:331] hybrid proxy: syncProxyRules start\nI0920 00:30:35.232272   49386 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0920 00:30:35.294766   49386 proxier.go:367] userspace proxy: processing 0 service events\nI0920 00:30:35.294789   49386 proxier.go:346] userspace syncProxyRules took 62.496174ms\nI0920 00:30:35.294799   49386 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0920 00:31:05.294980   49386 proxy.go:331] hybrid proxy: syncProxyRules start\nI0920 00:31:05.449288   49386 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0920 00:31:05.512390   49386 proxier.go:367] userspace proxy: processing 0 service events\nI0920 00:31:05.512416   49386 proxier.go:346] userspace syncProxyRules took 63.106114ms\nI0920 00:31:05.512426   49386 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0920 00:31:17.492597   49386 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0920 00:31:18.845525   49386 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0920 00:31:18.845564   49386 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 20 00:38:39.319 E ns/openshift-machine-config-operator pod/machine-config-server-56b68 node/ip-10-0-128-186.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): 
Sep 20 00:38:53.834 E ns/openshift-machine-config-operator pod/machine-config-operator-7cf9d76f99-8kkg7 node/ip-10-0-151-235.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Sep 20 00:38:56.835 E ns/openshift-machine-api pod/machine-api-operator-9bfd7cf79-l48z9 node/ip-10-0-151-235.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Sep 20 00:38:58.437 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-7f5b7cbf56-vf4c2 node/ip-10-0-151-235.us-west-1.compute.internal container=guard container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:39:01.842 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7d8fcd75f9-4q6w2 node/ip-10-0-151-235.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): son":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0920 00:29:12.288286       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"da17c032-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("Progressing: 3 nodes are at revision 7"),Available message changed from "Available: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7" to "Available: 3 nodes are active; 3 nodes are at revision 7"\nI0920 00:29:13.273812       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"da17c032-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-7 -n openshift-kube-scheduler: cause by changes in data.status\nI0920 00:29:17.879971       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"da17c032-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-7-ip-10-0-151-235.us-west-1.compute.internal -n openshift-kube-scheduler because it was missing\nW0920 00:32:44.648706       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 26203 (30043)\nW0920 00:34:48.998152       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 26212 (30681)\nI0920 00:38:54.617149       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0920 00:38:54.617292       1 builder.go:217] server exited\n
Sep 20 00:39:04.637 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-b8c85bd4c-6gt7l node/ip-10-0-151-235.us-west-1.compute.internal container=manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:39:05.235 E ns/openshift-cluster-machine-approver pod/machine-approver-6774bfd8f9-w8f94 node/ip-10-0-151-235.us-west-1.compute.internal container=machine-approver-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:39:06.663 E ns/openshift-authentication-operator pod/authentication-operator-b5d58cb74-b7262 node/ip-10-0-151-235.us-west-1.compute.internal container=operator container exited with code 255 (Error): 2791       1 status_controller.go:165] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-20T00:09:55Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-20T00:30:38Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-20T00:09:55Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-20T00:03:09Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0920 00:30:38.210439       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"e239873c-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout" to "",Progressing changed from True to False ("")\nW0920 00:35:40.516158       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 29239 (30918)\nW0920 00:35:56.512156       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 29239 (31011)\nW0920 00:36:33.520369       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 29239 (31268)\nW0920 00:38:25.520401       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 29239 (31813)\nW0920 00:38:42.489653       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Deployment ended with: too old resource version: 29492 (29643)\nI0920 00:38:53.436215       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0920 00:38:53.436280       1 leaderelection.go:66] leaderelection lost\n
Sep 20 00:39:09.073 E ns/openshift-operator-lifecycle-manager pod/packageserver-6c9cffb8d8-6sdsh node/ip-10-0-128-186.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:39:11.263 E ns/openshift-machine-config-operator pod/machine-config-server-2rn2h node/ip-10-0-151-235.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): 
Sep 20 00:39:16.908 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-141-28.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): =info ts=2020-09-20T00:39:07.405Z caller=main.go:332 fd_limits="(soft=1048576, hard=1048576)"\nlevel=info ts=2020-09-20T00:39:07.405Z caller=main.go:333 vm_limits="(soft=unlimited, hard=unlimited)"\nlevel=info ts=2020-09-20T00:39:07.407Z caller=main.go:652 msg="Starting TSDB ..."\nlevel=info ts=2020-09-20T00:39:07.407Z caller=web.go:448 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-20T00:39:07.412Z caller=main.go:667 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-20T00:39:07.412Z caller=main.go:668 msg="TSDB started"\nlevel=info ts=2020-09-20T00:39:07.412Z caller=main.go:738 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-20T00:39:07.412Z caller=main.go:521 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-20T00:39:07.412Z caller=main.go:535 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-20T00:39:07.412Z caller=main.go:557 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-20T00:39:07.412Z caller=main.go:531 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-20T00:39:07.412Z caller=main.go:517 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-20T00:39:07.412Z caller=manager.go:776 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-20T00:39:07.412Z caller=main.go:551 msg="Scrape manager stopped"\nlevel=info ts=2020-09-20T00:39:07.412Z caller=manager.go:782 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-20T00:39:07.413Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-20T00:39:07.413Z caller=main.go:722 msg="Notifier manager stopped"\nlevel=error ts=2020-09-20T00:39:07.413Z caller=main.go:731 err="error loading config from \"/etc/prometheus/config_out/prometheus.env.yaml\": couldn't load configuration (--config.file=\"/etc/prometheus/config_out/prometheus.env.yaml\"): open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
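This prometheus exit (code 1) is a startup race: the container comes up before its config-reloader sidecar has rendered /etc/prometheus/config_out/prometheus.env.yaml, fails to load the file, and shuts down until a later restart finds it. A minimal sketch, assuming a hypothetical wait-for-file guard, of how a startup wrapper could sidestep that race; this is not part of Prometheus or the prometheus-operator.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the deadline passes. A hypothetical
// guard: Prometheus itself simply exits when the file rendered by its
// config-reloader sidecar is not there yet, as in the log above.
func waitForFile(path string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	const cfg = "/etc/prometheus/config_out/prometheus.env.yaml"
	if err := waitForFile(cfg, 30*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("config present; safe to start prometheus with --config.file=" + cfg)
}
```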
Sep 20 00:39:20.758 E ns/openshift-machine-config-operator pod/machine-config-server-752bb node/ip-10-0-132-71.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): 
Sep 20 00:39:25.589 E ns/openshift-console pod/downloads-78c74495d-4knvx node/ip-10-0-151-235.us-west-1.compute.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:39:32.979 E ns/openshift-operator-lifecycle-manager pod/packageserver-6c9cffb8d8-x9hc6 node/ip-10-0-132-71.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:39:54.990 E ns/openshift-marketplace pod/redhat-operators-7958bd589f-hqnxb node/ip-10-0-141-28.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Sep 20 00:40:35.073 E ns/openshift-marketplace pod/community-operators-7dbc47d57d-zn8kb node/ip-10-0-141-28.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Sep 20 00:41:33.420 E ns/openshift-cluster-node-tuning-operator pod/tuned-zh46f node/ip-10-0-137-51.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:41:34.621 E ns/openshift-monitoring pod/node-exporter-lz5fr node/ip-10-0-137-51.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:41:35.019 E ns/openshift-image-registry pod/node-ca-ndvxn node/ip-10-0-137-51.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:41:35.420 E ns/openshift-dns pod/dns-default-blhvb node/ip-10-0-137-51.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:41:36.218 E ns/openshift-multus pod/multus-npwmc node/ip-10-0-137-51.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:41:36.619 E ns/openshift-sdn pod/ovs-w88pp node/ip-10-0-137-51.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:41:37.019 E ns/openshift-machine-config-operator pod/machine-config-daemon-xkqlt node/ip-10-0-137-51.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:42:18.257 E ns/openshift-cluster-version pod/cluster-version-operator-7db88ff76f-9qzcg node/ip-10-0-151-235.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:42:18.660 E ns/openshift-dns pod/dns-default-hwjjl node/ip-10-0-151-235.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:42:19.858 E ns/openshift-sdn pod/sdn-controller-7c9x9 node/ip-10-0-151-235.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:42:20.259 E ns/openshift-sdn pod/ovs-fj82x node/ip-10-0-151-235.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:42:20.660 E ns/openshift-multus pod/multus-admission-controller-7kw7f node/ip-10-0-151-235.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:42:21.061 E ns/openshift-image-registry pod/node-ca-cn8kg node/ip-10-0-151-235.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:42:21.463 E ns/openshift-monitoring pod/node-exporter-4p7lv node/ip-10-0-151-235.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:42:21.858 E ns/openshift-cluster-node-tuning-operator pod/tuned-s4kjr node/ip-10-0-151-235.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:42:22.270 E ns/openshift-controller-manager pod/controller-manager-29krt node/ip-10-0-151-235.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:42:22.328 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-141-28.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/09/20 00:28:29 Watching directory: "/etc/alertmanager/config"\n
Sep 20 00:42:22.328 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-141-28.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/20 00:28:30 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 00:28:30 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 00:28:30 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 00:28:30 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/20 00:28:30 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 00:28:30 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 00:28:30 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 00:28:30 http.go:106: HTTPS: listening on [::]:9095\n
Sep 20 00:42:22.659 E ns/openshift-multus pod/multus-7j7s5 node/ip-10-0-151-235.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:42:23.059 E ns/openshift-machine-config-operator pod/machine-config-daemon-2b4g2 node/ip-10-0-151-235.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:42:23.857 E ns/openshift-machine-config-operator pod/machine-config-server-czjj5 node/ip-10-0-151-235.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
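The "invariant violation: pod may not transition Running->Pending" entries above come from the test monitor: a pod it has already observed in phase Running is later reported as Pending, which it treats as an illegal backwards transition (typically around a node reboot during the upgrade). A minimal sketch, with assumed names, of how such a phase-transition check can be expressed; it is not the openshift-tests monitor code.

```go
package main

import "fmt"

// phaseTransitionViolation reports whether a pod moving from oldPhase to
// newPhase breaks the "may not transition Running->Pending" invariant the
// monitor asserts above. The function name and shape are illustrative only.
func phaseTransitionViolation(oldPhase, newPhase string) (string, bool) {
	if oldPhase == "Running" && newPhase == "Pending" {
		return "invariant violation: pod may not transition Running->Pending", true
	}
	return "", false
}

func main() {
	if msg, bad := phaseTransitionViolation("Running", "Pending"); bad {
		fmt.Println(msg) // what the monitor records as an E-level event
	}
}
```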
Sep 20 00:42:37.567 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-51.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): =info ts=2020-09-20T00:42:35.615Z caller=main.go:332 fd_limits="(soft=1048576, hard=1048576)"\nlevel=info ts=2020-09-20T00:42:35.615Z caller=main.go:333 vm_limits="(soft=unlimited, hard=unlimited)"\nlevel=info ts=2020-09-20T00:42:35.618Z caller=web.go:448 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-20T00:42:35.618Z caller=main.go:652 msg="Starting TSDB ..."\nlevel=info ts=2020-09-20T00:42:35.636Z caller=main.go:667 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-20T00:42:35.636Z caller=main.go:668 msg="TSDB started"\nlevel=info ts=2020-09-20T00:42:35.636Z caller=main.go:738 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-20T00:42:35.636Z caller=main.go:521 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-20T00:42:35.636Z caller=main.go:535 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-20T00:42:35.636Z caller=main.go:557 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-20T00:42:35.636Z caller=main.go:531 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-20T00:42:35.636Z caller=main.go:517 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-20T00:42:35.636Z caller=manager.go:776 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-20T00:42:35.636Z caller=manager.go:782 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-20T00:42:35.636Z caller=main.go:551 msg="Scrape manager stopped"\nlevel=info ts=2020-09-20T00:42:35.637Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-20T00:42:35.637Z caller=main.go:722 msg="Notifier manager stopped"\nlevel=error ts=2020-09-20T00:42:35.637Z caller=main.go:731 err="error loading config from \"/etc/prometheus/config_out/prometheus.env.yaml\": couldn't load configuration (--config.file=\"/etc/prometheus/config_out/prometheus.env.yaml\"): open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 20 00:42:48.158 E clusteroperator/monitoring changed Degraded to True: UpdatingGrafanaFailed: Failed to rollout the stack. Error: running task Updating Grafana failed: waiting for Grafana Route to become ready failed: waiting for RouteReady of grafana: the server is currently unable to handle the request (get routes.route.openshift.io grafana)
Sep 20 00:43:11.067 E ns/openshift-machine-config-operator pod/machine-config-operator-7cf9d76f99-86nts node/ip-10-0-132-71.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): 
Sep 20 00:43:13.668 E ns/openshift-service-ca-operator pod/service-ca-operator-9dd88c4c9-72tgz node/ip-10-0-132-71.us-west-1.compute.internal container=operator container exited with code 255 (Error): 
Sep 20 00:43:15.668 E ns/openshift-service-ca pod/apiservice-cabundle-injector-6869cf68d-6l4l7 node/ip-10-0-132-71.us-west-1.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Sep 20 00:43:18.670 E ns/openshift-cluster-machine-approver pod/machine-approver-6774bfd8f9-z5tpq node/ip-10-0-132-71.us-west-1.compute.internal container=machine-approver-controller container exited with code 2 (Error): .\nI0920 00:39:03.213393       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0920 00:39:03.213477       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0920 00:39:03.213553       1 main.go:229] Starting Machine Approver\nI0920 00:39:03.318592       1 main.go:139] CSR csr-f4ff4 added\nI0920 00:39:03.318631       1 main.go:142] CSR csr-f4ff4 is already approved\nI0920 00:39:03.318663       1 main.go:139] CSR csr-jsvqv added\nI0920 00:39:03.318672       1 main.go:142] CSR csr-jsvqv is already approved\nI0920 00:39:03.318689       1 main.go:139] CSR csr-ks5vd added\nI0920 00:39:03.318704       1 main.go:142] CSR csr-ks5vd is already approved\nI0920 00:39:03.318717       1 main.go:139] CSR csr-mqxs8 added\nI0920 00:39:03.318725       1 main.go:142] CSR csr-mqxs8 is already approved\nI0920 00:39:03.318747       1 main.go:139] CSR csr-zgn9v added\nI0920 00:39:03.318756       1 main.go:142] CSR csr-zgn9v is already approved\nI0920 00:39:03.318774       1 main.go:139] CSR csr-4m8dm added\nI0920 00:39:03.318783       1 main.go:142] CSR csr-4m8dm is already approved\nI0920 00:39:03.318804       1 main.go:139] CSR csr-gbr44 added\nI0920 00:39:03.318813       1 main.go:142] CSR csr-gbr44 is already approved\nI0920 00:39:03.318832       1 main.go:139] CSR csr-jpk9v added\nI0920 00:39:03.318843       1 main.go:142] CSR csr-jpk9v is already approved\nI0920 00:39:03.318861       1 main.go:139] CSR csr-njqhw added\nI0920 00:39:03.318869       1 main.go:142] CSR csr-njqhw is already approved\nI0920 00:39:03.318887       1 main.go:139] CSR csr-q6zn2 added\nI0920 00:39:03.318896       1 main.go:142] CSR csr-q6zn2 is already approved\nI0920 00:39:03.318913       1 main.go:139] CSR csr-xnfsb added\nI0920 00:39:03.318922       1 main.go:142] CSR csr-xnfsb is already approved\nI0920 00:39:03.318938       1 main.go:139] CSR csr-5wp2g added\nI0920 00:39:03.318947       1 main.go:142] CSR csr-5wp2g is already approved\n
Sep 20 00:43:19.268 E ns/openshift-cluster-samples-operator pod/cluster-samples-operator-6bc77cdc87-449xm node/ip-10-0-132-71.us-west-1.compute.internal container=cluster-samples-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:43:19.873 E ns/openshift-machine-api pod/machine-api-controllers-d7fc644cf-w6wxj node/ip-10-0-132-71.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Sep 20 00:43:20.469 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-5d5dcc5cb7-tmwv7 node/ip-10-0-132-71.us-west-1.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): sage:Network plugin returns error: Missing CNI default network)"\nI0920 00:42:55.135181       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"da044be0-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-151-235.us-west-1.compute.internal pods/kube-apiserver-ip-10-0-151-235.us-west-1.compute.internal container=\"kube-apiserver-9\" is not ready\nNodeControllerDegraded: The master nodes not ready: node \"ip-10-0-151-235.us-west-1.compute.internal\" not ready since 2020-09-20 00:42:05 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)" to "StaticPodsDegraded: nodes/ip-10-0-151-235.us-west-1.compute.internal pods/kube-apiserver-ip-10-0-151-235.us-west-1.compute.internal container=\"kube-apiserver-9\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0920 00:42:57.805722       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"da044be0-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-151-235.us-west-1.compute.internal pods/kube-apiserver-ip-10-0-151-235.us-west-1.compute.internal container=\"kube-apiserver-9\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"\nI0920 00:43:06.910609       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0920 00:43:06.910810       1 leaderelection.go:66] leaderelection lost\n
Sep 20 00:43:21.670 E ns/openshift-authentication-operator pod/authentication-operator-b5d58cb74-mscfd node/ip-10-0-132-71.us-west-1.compute.internal container=operator container exited with code 255 (Error): ogressingOAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-09-20T00:09:55Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-20T00:03:09Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0920 00:41:01.451684       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"e239873c-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "OAuthClientsDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io openshift-browser-client)" to "",Progressing changed from False to True ("Progressing: not all deployment replicas are ready")\nI0920 00:41:17.676320       1 status_controller.go:165] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-20T00:09:55Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-20T00:41:17Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-20T00:09:55Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-20T00:03:09Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0920 00:41:17.683975       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"e239873c-fad3-11ea-a004-062f4b0da8c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from True to False ("")\nI0920 00:43:02.535520       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0920 00:43:02.536272       1 leaderelection.go:66] leaderelection lost\n
Sep 20 00:43:24.683 E ns/openshift-machine-api pod/machine-api-operator-9bfd7cf79-ldxlp node/ip-10-0-132-71.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Sep 20 00:44:34.617 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
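The "OpenShift API is not responding to GET requests" lines record intervals (start time plus duration) during which the monitor's periodic GETs against the OpenShift API failed. A rough sketch of such an availability poller, with a placeholder URL, interval, and timeout chosen for illustration; it is not the actual monitor implementation.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// pollAPI sends a GET to the given URL every interval and prints the start
// time and length of each stretch during which requests fail, roughly
// matching the "start - duration" lines above.
func pollAPI(url string, interval time.Duration) {
	client := &http.Client{Timeout: 5 * time.Second}
	var downSince time.Time
	for range time.Tick(interval) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
		}
		healthy := err == nil && resp.StatusCode < 500
		switch {
		case !healthy && downSince.IsZero():
			downSince = time.Now()
		case healthy && !downSince.IsZero():
			fmt.Printf("%s - %s E OpenShift API was not responding to GET requests\n",
				downSince.Format("Jan 02 15:04:05.000"),
				time.Since(downSince).Round(time.Second))
			downSince = time.Time{}
		}
	}
}

func main() {
	// Placeholder endpoint; a real probe would target the cluster's API URL.
	pollAPI("https://api.example.cluster:6443/apis/apps.openshift.io/v1", 15*time.Second)
}
```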
Sep 20 00:45:35.789 E ns/openshift-cluster-node-tuning-operator pod/tuned-qxl84 node/ip-10-0-141-28.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:45:36.168 E ns/openshift-monitoring pod/node-exporter-zstdl node/ip-10-0-141-28.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:45:36.604 E ns/openshift-image-registry pod/node-ca-q4gr2 node/ip-10-0-141-28.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:45:37.477 E ns/openshift-dns pod/dns-default-gk5l8 node/ip-10-0-141-28.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:45:37.914 E ns/openshift-multus pod/multus-x4t2r node/ip-10-0-141-28.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:45:38.349 E ns/openshift-sdn pod/ovs-z8v4v node/ip-10-0-141-28.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:45:38.773 E ns/openshift-machine-config-operator pod/machine-config-daemon-td2ls node/ip-10-0-141-28.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:45:40.011 E ns/openshift-authentication pod/oauth-openshift-5df468bbb5-hd2vp node/ip-10-0-128-186.us-west-1.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:46:06.964 E ns/openshift-controller-manager pod/controller-manager-z5h6f node/ip-10-0-132-71.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:46:07.364 E ns/openshift-machine-config-operator pod/machine-config-server-nklml node/ip-10-0-132-71.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:46:08.172 E ns/openshift-sdn pod/sdn-s8rjg node/ip-10-0-132-71.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:46:08.572 E ns/openshift-apiserver pod/apiserver-r9dw5 node/ip-10-0-132-71.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:46:08.976 E ns/openshift-dns pod/dns-default-pf7m9 node/ip-10-0-132-71.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:46:09.365 E ns/openshift-machine-config-operator pod/machine-config-daemon-vgg7w node/ip-10-0-132-71.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:46:10.568 E ns/openshift-monitoring pod/node-exporter-zrl4g node/ip-10-0-132-71.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:46:10.964 E ns/openshift-multus pod/multus-6b7k8 node/ip-10-0-132-71.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:46:11.365 E ns/openshift-sdn pod/sdn-controller-5z8l5 node/ip-10-0-132-71.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:46:11.765 E ns/openshift-cluster-node-tuning-operator pod/tuned-kpbq7 node/ip-10-0-132-71.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:46:12.163 E ns/openshift-image-registry pod/node-ca-c6p7s node/ip-10-0-132-71.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:46:12.563 E ns/openshift-multus pod/multus-admission-controller-dv5bw node/ip-10-0-132-71.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:46:12.965 E ns/openshift-sdn pod/ovs-5vjnn node/ip-10-0-132-71.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:46:13.208 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-148-225.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/09/20 00:28:12 Watching directory: "/etc/alertmanager/config"\n
Sep 20 00:46:13.208 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-148-225.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/20 00:28:13 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 00:28:13 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 00:28:13 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 00:28:13 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/20 00:28:13 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 00:28:13 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 00:28:13 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 00:28:13 http.go:106: HTTPS: listening on [::]:9095\n
Sep 20 00:46:15.007 E ns/openshift-monitoring pod/openshift-state-metrics-794bc55b5b-qkmtj node/ip-10-0-148-225.us-west-1.compute.internal container=kube-rbac-proxy-self container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:46:15.007 E ns/openshift-monitoring pod/openshift-state-metrics-794bc55b5b-qkmtj node/ip-10-0-148-225.us-west-1.compute.internal container=kube-rbac-proxy-main container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:46:15.007 E ns/openshift-monitoring pod/openshift-state-metrics-794bc55b5b-qkmtj node/ip-10-0-148-225.us-west-1.compute.internal container=openshift-state-metrics container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:46:16.208 E ns/openshift-marketplace pod/redhat-operators-84c4f597d6-nbfp2 node/ip-10-0-148-225.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Sep 20 00:46:17.811 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-225.us-west-1.compute.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:46:17.811 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-225.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:46:17.811 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-225.us-west-1.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:46:17.811 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-225.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:46:17.811 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-225.us-west-1.compute.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:46:17.811 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-148-225.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:46:20.807 E ns/openshift-monitoring pod/prometheus-adapter-6d48fb47f9-w892l node/ip-10-0-148-225.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0920 00:25:46.939578       1 adapter.go:93] successfully using in-cluster auth\nI0920 00:25:47.420093       1 secure_serving.go:116] Serving securely on [::]:6443\n
Sep 20 00:46:38.738 E ns/openshift-console pod/downloads-78c74495d-69lwl node/ip-10-0-148-225.us-west-1.compute.internal container=download-server container exited with code 137 (Error): 0 00:43:52] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:43:55] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:44:02] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:44:05] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:44:12] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:44:15] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:44:22] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:44:25] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:44:32] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:44:35] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:44:42] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:44:45] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:44:52] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:44:55] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:45:02] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:45:05] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:45:12] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:45:15] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:45:22] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:45:25] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:45:32] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:45:35] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:45:42] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:45:45] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:45:52] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:45:55] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:46:02] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:46:05] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:46:12] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:46:15] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:46:22] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:46:25] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:46:32] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [20/Sep/2020 00:46:35] "GET / HTTP/1.1" 200 -\n
Sep 20 00:46:49.541 E clusteroperator/kube-apiserver changed Degraded to True: NodeControllerDegradedMasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-132-71.us-west-1.compute.internal" not ready since 2020-09-20 00:45:55 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Sep 20 00:46:49.541 E clusteroperator/kube-controller-manager changed Degraded to True: NodeControllerDegradedMasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-132-71.us-west-1.compute.internal" not ready since 2020-09-20 00:45:55 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Sep 20 00:46:49.549 E clusteroperator/kube-scheduler changed Degraded to True: NodeControllerDegradedMasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-132-71.us-west-1.compute.internal" not ready since 2020-09-20 00:45:55 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Sep 20 00:47:03.514 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-d66684796-hh5x8 node/ip-10-0-128-186.us-west-1.compute.internal container=cluster-node-tuning-operator container exited with code 255 (Error): t\nI0920 00:44:49.238866       1 status.go:25] syncOperatorStatus()\nI0920 00:44:49.274852       1 tuned_controller.go:187] syncServiceAccount()\nI0920 00:44:49.275210       1 tuned_controller.go:214] syncClusterRole()\nI0920 00:44:49.456494       1 tuned_controller.go:245] syncClusterRoleBinding()\nI0920 00:44:49.578352       1 tuned_controller.go:276] syncClusterConfigMap()\nI0920 00:44:49.587434       1 tuned_controller.go:276] syncClusterConfigMap()\nI0920 00:44:49.597038       1 tuned_controller.go:313] syncDaemonSet()\nI0920 00:45:49.061020       1 tuned_controller.go:432] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0920 00:45:49.061075       1 status.go:25] syncOperatorStatus()\nI0920 00:45:49.077362       1 tuned_controller.go:187] syncServiceAccount()\nI0920 00:45:49.077498       1 tuned_controller.go:214] syncClusterRole()\nI0920 00:45:49.156380       1 tuned_controller.go:245] syncClusterRoleBinding()\nI0920 00:45:49.233993       1 tuned_controller.go:276] syncClusterConfigMap()\nI0920 00:45:49.238674       1 tuned_controller.go:276] syncClusterConfigMap()\nI0920 00:45:49.242921       1 tuned_controller.go:313] syncDaemonSet()\nW0920 00:46:21.520275       1 reflector.go:289] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:204: watch of *v1.ConfigMap ended with: too old resource version: 33302 (37282)\nI0920 00:46:42.290500       1 tuned_controller.go:432] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0920 00:46:42.290640       1 status.go:25] syncOperatorStatus()\nI0920 00:46:42.304923       1 tuned_controller.go:187] syncServiceAccount()\nI0920 00:46:42.305071       1 tuned_controller.go:214] syncClusterRole()\nI0920 00:46:42.396119       1 tuned_controller.go:245] syncClusterRoleBinding()\nI0920 00:46:42.482639       1 tuned_controller.go:276] syncClusterConfigMap()\nI0920 00:46:42.487908       1 tuned_controller.go:276] syncClusterConfigMap()\nI0920 00:46:42.494771       1 tuned_controller.go:313] syncDaemonSet()\nF0920 00:46:59.772361       1 main.go:82] <nil>\n
Sep 20 00:47:14.909 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-b8c85bd4c-bqdcl node/ip-10-0-128-186.us-west-1.compute.internal container=manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:47:21.536 E ns/openshift-service-ca pod/service-serving-cert-signer-5b767f7c98-q27pz node/ip-10-0-128-186.us-west-1.compute.internal container=service-serving-cert-signer-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:47:26.310 E ns/openshift-operator-lifecycle-manager pod/packageserver-b4558b4df-v9nzr node/ip-10-0-132-71.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 00:47:32.124 E ns/openshift-cluster-node-tuning-operator pod/tuned-zh46f node/ip-10-0-137-51.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 0920 00:44:07.583668    2834 openshift-tuned.go:390] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0920 00:44:07.618825    2834 openshift-tuned.go:441] Getting recommended profile...\nI0920 00:44:08.060537    2834 openshift-tuned.go:635] Active profile () != recommended profile (openshift-node)\nI0920 00:44:08.060617    2834 openshift-tuned.go:263] Starting tuned...\n2020-09-20 00:44:08,177 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-09-20 00:44:08,197 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-09-20 00:44:08,197 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-09-20 00:44:08,199 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-09-20 00:44:08,199 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-09-20 00:44:08,245 INFO     tuned.daemon.controller: starting controller\n2020-09-20 00:44:08,245 INFO     tuned.daemon.daemon: starting tuning\n2020-09-20 00:44:08,251 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-09-20 00:44:08,251 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-09-20 00:44:08,254 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-20 00:44:08,258 INFO     tuned.plugins.base: instance disk: assigning devices xvda\n2020-09-20 00:44:08,260 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-09-20 00:44:08,390 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-20 00:44:08,402 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0920 00:44:17.186096    2834 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0920 00:44:17.193874    2834 openshift-tuned.go:881] Pod event watch channel closed.\nI0920 00:44:17.193897    2834 openshift-tuned.go:883] Increasing resyncPeriod to 236\n
Sep 20 00:48:49.617 - 44s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 20 00:49:18.406 E ns/openshift-monitoring pod/node-exporter-sh5rq node/ip-10-0-148-225.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:49:18.806 E ns/openshift-image-registry pod/node-ca-vtpql node/ip-10-0-148-225.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:49:19.206 E ns/openshift-multus pod/multus-vwqvr node/ip-10-0-148-225.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:49:20.007 E ns/openshift-dns pod/dns-default-pqjrw node/ip-10-0-148-225.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:49:20.606 E ns/openshift-sdn pod/ovs-ksnt9 node/ip-10-0-148-225.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:49:21.205 E ns/openshift-machine-config-operator pod/machine-config-daemon-9lrqb node/ip-10-0-148-225.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:49:21.606 E ns/openshift-cluster-node-tuning-operator pod/tuned-r5nld node/ip-10-0-148-225.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:50:36.298 E ns/openshift-monitoring pod/node-exporter-gmbt9 node/ip-10-0-128-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:50:36.695 E ns/openshift-image-registry pod/node-ca-8nhts node/ip-10-0-128-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:50:37.095 E ns/openshift-multus pod/multus-admission-controller-8tjt6 node/ip-10-0-128-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:50:37.495 E ns/openshift-machine-config-operator pod/machine-config-server-gzvlc node/ip-10-0-128-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:50:38.298 E ns/openshift-apiserver pod/apiserver-njz8j node/ip-10-0-128-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:50:38.695 E ns/openshift-sdn pod/ovs-jwqqh node/ip-10-0-128-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:50:39.094 E ns/openshift-sdn pod/sdn-controller-sdskm node/ip-10-0-128-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:50:39.896 E ns/openshift-dns pod/dns-default-szfmz node/ip-10-0-128-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:50:40.295 E ns/openshift-machine-config-operator pod/machine-config-daemon-l5s2j node/ip-10-0-128-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:50:41.096 E ns/openshift-cluster-node-tuning-operator pod/tuned-g6nsb node/ip-10-0-128-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:50:41.894 E ns/openshift-controller-manager pod/controller-manager-hcfcn node/ip-10-0-128-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:50:42.295 E ns/openshift-multus pod/multus-4f855 node/ip-10-0-128-186.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Sep 20 00:51:34.617 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 20 00:52:01.007 E ns/openshift-apiserver pod/apiserver-njz8j node/ip-10-0-128-186.us-west-1.compute.internal container=openshift-apiserver container exited with code 255 (Error): pdate addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0920 00:51:40.687822       1 store.go:1319] Monitoring builds.build.openshift.io count at <storage-prefix>//builds\nI0920 00:51:40.688164       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0920 00:51:40.688517       1 client.go:352] parsed scheme: ""\nI0920 00:51:40.688580       1 client.go:352] scheme "" not registered, fallback to default scheme\nI0920 00:51:40.688649       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.openshift-etcd.svc:2379 0  <nil>}]\nI0920 00:51:40.688734       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nW0920 00:51:50.697302       1 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {etcd.openshift-etcd.svc:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup etcd.openshift-etcd.svc on 172.30.0.10:53: no such host". Reconnecting...\nI0920 00:52:00.688771       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []\nI0920 00:52:00.688823       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nW0920 00:52:00.688856       1 asm_amd64.s:1337] Failed to dial etcd.openshift-etcd.svc:2379: grpc: the connection is closing; please retry.\nW0920 00:52:00.688914       1 asm_amd64.s:1337] Failed to dial etcd.openshift-etcd.svc:2379: grpc: the connection is closing; please retry.\nF0920 00:52:00.688893       1 storage_decorator.go:57] Unable to create storage backend: config (&{etcd3 openshift.io {[https://etcd.openshift-etcd.svc:2379] /var/run/secrets/etcd-client/tls.key /var/run/secrets/etcd-client/tls.crt /var/run/configmaps/etcd-serving-ca/ca-bundle.crt} false true {0xc0014d4090 0xc0014d4120} {{build.openshift.io v1} [{build.openshift.io } {build.openshift.io }] false} <nil> 5m0s 1m0s}), err (dial tcp: lookup etcd.openshift-etcd.svc on 172.30.0.10:53: no such host)\n
Sep 20 00:52:34.617 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 20 00:53:19.617 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 20 00:53:50.239 E ns/openshift-apiserver pod/apiserver-njz8j node/ip-10-0-128-186.us-west-1.compute.internal container=openshift-apiserver container exited with code 255 (Error): to cc: [{etcd.openshift-etcd.svc:2379 0  <nil>}]\nI0920 00:53:24.161199       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0920 00:53:24.161348       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0920 00:53:29.174508       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0920 00:53:29.174502       1 store.go:1319] Monitoring deploymentconfigs.apps.openshift.io count at <storage-prefix>//deploymentconfigs\nI0920 00:53:29.399338       1 client.go:352] parsed scheme: ""\nI0920 00:53:29.399367       1 client.go:352] scheme "" not registered, fallback to default scheme\nI0920 00:53:29.399421       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.openshift-etcd.svc:2379 0  <nil>}]\nI0920 00:53:29.399505       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nW0920 00:53:44.408507       1 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {etcd.openshift-etcd.svc:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup etcd.openshift-etcd.svc on 172.30.0.10:53: no such host". Reconnecting...\nI0920 00:53:49.399654       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []\nF0920 00:53:49.399646       1 storage_decorator.go:57] Unable to create storage backend: config (&{etcd3 openshift.io {[https://etcd.openshift-etcd.svc:2379] /var/run/secrets/etcd-client/tls.key /var/run/secrets/etcd-client/tls.crt /var/run/configmaps/etcd-serving-ca/ca-bundle.crt} false true {0xc001050a20 0xc001050ab0} {{build.openshift.io v1} [{build.openshift.io } {build.openshift.io }] false} <nil> 5m0s 1m0s}), err (dial tcp: lookup etcd.openshift-etcd.svc on 172.30.0.10:53: no such host)\nW0920 00:53:49.399719       1 asm_amd64.s:1337] Failed to dial etcd.openshift-etcd.svc:2379: grpc: the connection is closing; please retry.\n
Sep 20 00:54:49.617 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 20 00:54:52.779 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Alertmanager host: getting Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io alertmanager-main)
Sep 20 00:54:52.788 E clusteroperator/authentication changed Degraded to True: OAuthClientsDegradedError: OAuthClientsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io openshift-challenging-client)
Sep 20 00:55:49.617 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 20 00:56:34.221 E clusteroperator/console changed Degraded to True: OAuthClientSyncDegradedFailedGet: OAuthClientSyncDegraded: oauth client for console does not exist and cannot be created (the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console))
Sep 20 00:56:54.642 E ns/openshift-apiserver pod/apiserver-njz8j node/ip-10-0-128-186.us-west-1.compute.internal container=openshift-apiserver container exited with code 255 (Error): eLimitRange,image.openshift.io/ImagePolicy,quota.openshift.io/ClusterResourceQuota,ValidatingAdmissionWebhook,ResourceQuota.\nI0920 00:56:24.227704       1 client.go:352] parsed scheme: ""\nI0920 00:56:24.227727       1 client.go:352] scheme "" not registered, fallback to default scheme\nI0920 00:56:24.227796       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.openshift-etcd.svc:2379 0  <nil>}]\nI0920 00:56:24.227866       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0920 00:56:34.242744       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0920 00:56:34.243118       1 client.go:352] parsed scheme: ""\nI0920 00:56:34.243139       1 client.go:352] scheme "" not registered, fallback to default scheme\nI0920 00:56:34.243186       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.openshift-etcd.svc:2379 0  <nil>}]\nI0920 00:56:34.243243       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nW0920 00:56:49.254128       1 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {etcd.openshift-etcd.svc:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup etcd.openshift-etcd.svc on 172.30.0.10:53: no such host". Reconnecting...\nI0920 00:56:54.243848       1 controlbuf.go:382] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nF0920 00:56:54.243412       1 storage_decorator.go:57] Unable to create storage backend: config (&{etcd3 openshift.io {[https://etcd.openshift-etcd.svc:2379] /var/run/secrets/etcd-client/tls.key /var/run/secrets/etcd-client/tls.crt /var/run/configmaps/etcd-serving-ca/ca-bundle.crt} false true {0xc0008bda70 0xc0008bdb00} {{apps.openshift.io v1} [{apps.openshift.io } {apps.openshift.io }] false} <nil> 5m0s 1m0s}), err (dial tcp: lookup etcd.openshift-etcd.svc on 172.30.0.10:53: no such host)\n
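The repeated openshift-apiserver crashes above all fail the same way: etcd.openshift-etcd.svc cannot be resolved through the cluster DNS at 172.30.0.10, so the storage backend cannot be created and the container exits. A small, purely illustrative sketch that separates "name does not resolve" from "resolves but the port is unreachable" when triaging this kind of failure.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// checkEtcdEndpoint distinguishes the two failure modes visible in the log
// above: the service name not resolving at all ("no such host") versus
// resolving but the client TCP port not accepting connections.
func checkEtcdEndpoint(host, port string) {
	addrs, err := net.LookupHost(host)
	if err != nil {
		fmt.Printf("DNS lookup failed for %s: %v\n", host, err)
		return
	}
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(addrs[0], port), 3*time.Second)
	if err != nil {
		fmt.Printf("%s resolves to %v but is not reachable: %v\n", host, addrs, err)
		return
	}
	conn.Close()
	fmt.Printf("%s reachable at %v\n", host, addrs)
}

func main() {
	checkEtcdEndpoint("etcd.openshift-etcd.svc", "2379")
}
```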
Sep 20 00:58:19.617 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 20 00:59:04.617 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 20 00:59:19.045 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Sep 20 00:59:49.617 - 29s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 20 01:01:04.617 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 20 01:02:34.617 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 20 01:03:34.617 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 20 01:03:45.154 E clusteroperator/console changed Degraded to True: OAuthClientSyncDegradedFailedGet: OAuthClientSyncDegraded: oauth client for console does not exist and cannot be created (the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io console))
Sep 20 01:04:04.617 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 20 01:04:46.781 E ns/openshift-marketplace pod/certified-operators-5dcd88fc4-6szzr node/ip-10-0-141-28.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Sep 20 01:04:53.774 E ns/openshift-operator-lifecycle-manager pod/packageserver-6f6b47b64c-6nwh7 node/ip-10-0-128-186.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 01:04:54.886 E ns/openshift-operator-lifecycle-manager pod/packageserver-6b48b56f69-4cg75 node/ip-10-0-151-235.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 20 01:05:06.652 E ns/openshift-operator-lifecycle-manager pod/packageserver-7b9d6dc6b-27wmt node/ip-10-0-132-71.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated