Result: SUCCESS
Tests: 2 failed / 983 succeeded
Started: 2020-09-19 21:11
Elapsed: 1h39m
Work namespace: ci-op-e0260344
Pod: a7b6e2c5-fabc-11ea-a1fd-0a580a800db2
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (23m56s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
1 error-level event was detected during this test run:

Sep 19 22:35:16.585 E ns/default pod/recycler-for-nfs-2748w node/ip-10-0-233-113.us-west-1.compute.internal pod failed (DeadlineExceeded): Pod was active on the node longer than the specified deadline

				
from junit_e2e_20200919-224026.xml
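
The lone error-level event here is a volume recycler pod in the default namespace that stayed active past its deadline, so the kubelet marked it failed with reason DeadlineExceeded ("Pod was active on the node longer than the specified deadline"). As a minimal illustrative sketch, not part of openshift-tests, the client-go program below lists pods in that namespace and prints any that report that reason; the kubeconfig location and the terse error handling are assumptions made for the example.

// Sketch: find pods that failed with reason DeadlineExceeded, as
// recycler-for-nfs-2748w did in the event above. The kubeconfig path
// (~/.kube/config) is an assumption for this example.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// The kubelet sets Status.Reason to "DeadlineExceeded" when a pod
		// stays active longer than its spec.activeDeadlineSeconds.
		if p.Status.Reason == "DeadlineExceeded" {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Message)
		}
	}
}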


openshift-tests Monitor cluster while tests execute (29m12s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
42 error-level events were detected during this test run:

Sep 19 21:47:28.482 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-239-137.us-west-1.compute.internal node/ip-10-0-239-137.us-west-1.compute.internal container=kube-controller-manager container exited with code 255 (Error): okmarks=true&resourceVersion=17784&timeout=8m41s&timeoutSeconds=521&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:47:27.111399       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?allowWatchBookmarks=true&resourceVersion=15946&timeout=5m54s&timeoutSeconds=354&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:47:27.112499       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/monitoring.coreos.com/v1/prometheuses?allowWatchBookmarks=true&resourceVersion=19548&timeout=8m11s&timeoutSeconds=491&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:47:27.113514       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/schedulers?allowWatchBookmarks=true&resourceVersion=15944&timeout=5m36s&timeoutSeconds=336&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:47:27.114567       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=16360&timeout=9m5s&timeoutSeconds=545&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0919 21:47:27.812375       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nI0919 21:47:27.812478       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-239-137_2f7afade-e1d9-4a82-afb4-af1894642f4d stopped leading\nF0919 21:47:27.812498       1 controllermanager.go:291] leaderelection lost\n
Sep 19 21:47:29.418 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-239-137.us-west-1.compute.internal node/ip-10-0-239-137.us-west-1.compute.internal container=kube-scheduler container exited with code 255 (Error):  https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=22113&timeout=9m11s&timeoutSeconds=551&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:47:27.947038       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=21807&timeout=7m28s&timeoutSeconds=448&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:47:27.952462       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=15431&timeout=6m59s&timeoutSeconds=419&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:47:27.953366       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=22055&timeoutSeconds=540&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:47:27.955138       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=15431&timeout=9m10s&timeoutSeconds=550&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:47:27.956315       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=15446&timeout=9m41s&timeoutSeconds=581&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0919 21:47:28.396647       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0919 21:47:28.396765       1 server.go:257] leaderelection lost\n
Sep 19 21:47:53.611 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-239-137.us-west-1.compute.internal node/ip-10-0-239-137.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): ift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=20622&timeout=9m24s&timeoutSeconds=564&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:47:52.993262       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?allowWatchBookmarks=true&resourceVersion=22088&timeout=8m14s&timeoutSeconds=494&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:47:52.995468       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=20448&timeout=6m9s&timeoutSeconds=369&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:47:53.010727       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config/secrets?allowWatchBookmarks=true&resourceVersion=21660&timeout=7m1s&timeoutSeconds=421&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:47:53.011801       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=16481&timeout=7m52s&timeoutSeconds=472&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:47:53.013095       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/configmaps?allowWatchBookmarks=true&resourceVersion=20262&timeout=7m25s&timeoutSeconds=445&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0919 21:47:53.234089       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0919 21:47:53.234136       1 leaderelection.go:67] leaderelection lost\n
Sep 19 21:47:53.616 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-239-137.us-west-1.compute.internal node/ip-10-0-239-137.us-west-1.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 19 21:52:55.018 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-548c9dc844-zlftj node/ip-10-0-239-137.us-west-1.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "ip-10-0-239-137.us-west-1.compute.internal" from revision 6 to 7 because static pod is ready\nI0919 21:49:39.449132       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"b0e30d93-7b69-4343-9530-75ae9614050f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7"\nI0919 21:49:41.626426       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"b0e30d93-7b69-4343-9530-75ae9614050f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-7 -n openshift-kube-apiserver:\ncause by changes in data.status\nI0919 21:49:46.428128       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"b0e30d93-7b69-4343-9530-75ae9614050f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-7-ip-10-0-239-137.us-west-1.compute.internal -n openshift-kube-apiserver because it was missing\nI0919 21:52:53.906931       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0919 21:52:53.907158       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nF0919 21:52:53.908103       1 builder.go:209] server exited\n
Sep 19 21:56:20.934 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-239-137.us-west-1.compute.internal node/ip-10-0-239-137.us-west-1.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 19 21:57:33.359 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-6765867dbc-fzr42 node/ip-10-0-239-137.us-west-1.compute.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): erator/kube-storage-version-migrator changed: {"conditions":[{"type":"Degraded","status":"False","lastTransitionTime":"2020-09-19T21:29:32Z","reason":"AsExpected"},{"type":"Progressing","status":"False","lastTransitionTime":"2020-09-19T21:29:33Z","reason":"AsExpected"},{"type":"Available","status":"False","lastTransitionTime":"2020-09-19T21:29:32Z","reason":"_NoMigratorPod","message":"Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"},{"type":"Upgradeable","status":"Unknown","lastTransitionTime":"2020-09-19T21:29:31Z","reason":"NoData"}],"versions":[{"name":"operator","version":"4.4.0-0.nightly-2020-09-19-160530"}\n\nA: ],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0919 21:39:33.599194       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"963ab708-c415-4169-bcd9-ddf10a5e7186", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0919 21:57:32.372693       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 21:57:32.372749       1 leaderelection.go:66] leaderelection lost\nF0919 21:57:32.375269       1 builder.go:210] server exited\n
Sep 19 21:58:50.275 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-34.us-west-1.compute.internal node/ip-10-0-152-34.us-west-1.compute.internal container=kube-controller-manager container exited with code 255 (Error): ction refused\nE0919 21:58:49.625668       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://localhost:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=24341&timeout=5m30s&timeoutSeconds=330&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:58:49.626776       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operators.coreos.com/v1alpha1/catalogsources?allowWatchBookmarks=true&resourceVersion=22127&timeout=8m3s&timeoutSeconds=483&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:58:49.627928       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/monitoring.coreos.com/v1/prometheuses?allowWatchBookmarks=true&resourceVersion=24341&timeout=9m59s&timeoutSeconds=599&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:58:49.629315       1 reflector.go:307] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.RangeAllocation: Get https://localhost:6443/apis/security.openshift.io/v1/rangeallocations?allowWatchBookmarks=true&resourceVersion=26306&timeout=6m57s&timeoutSeconds=417&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:58:49.630458       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=27198&timeout=9m41s&timeoutSeconds=581&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0919 21:58:49.792207       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0919 21:58:49.792308       1 controllermanager.go:291] leaderelection lost\n
Sep 19 21:58:50.276 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-152-34.us-west-1.compute.internal node/ip-10-0-152-34.us-west-1.compute.internal container=kube-scheduler container exited with code 255 (Error): :58:48.385420       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=26353&timeout=8m56s&timeoutSeconds=536&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:58:48.387857       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=26353&timeout=5m20s&timeoutSeconds=320&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:58:48.389329       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=25296&timeout=8m31s&timeoutSeconds=511&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:58:48.390912       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=25295&timeout=6m9s&timeoutSeconds=369&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:58:48.393428       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=22005&timeout=8m26s&timeoutSeconds=506&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0919 21:58:49.277794       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0919 21:58:49.277829       1 server.go:257] leaderelection lost\n
Sep 19 21:59:06.603 E ns/openshift-insights pod/insights-operator-5d47b754fb-xvtd7 node/ip-10-0-169-195.us-west-1.compute.internal container=operator container exited with code 2 (Error): 919 21:58:18.517803       1 diskrecorder.go:63] Recording config/ingress with fingerprint=\nI0919 21:58:18.520056       1 diskrecorder.go:63] Recording config/proxy with fingerprint=\nI0919 21:58:18.528077       1 diskrecorder.go:176] Writing 70 records to /var/lib/insights-operator/insights-2020-09-19-215818.tar.gz\nI0919 21:58:18.535072       1 diskrecorder.go:140] Wrote 70 records to disk in 7ms\nI0919 21:58:18.535103       1 periodic.go:151] Periodic gather config completed in 101ms\nI0919 21:58:25.805176       1 httplog.go:90] GET /metrics: (5.276561ms) 200 [Prometheus/2.15.2 10.128.2.14:36098]\nI0919 21:58:27.933855       1 httplog.go:90] GET /metrics: (2.056085ms) 200 [Prometheus/2.15.2 10.129.2.14:50752]\nI0919 21:58:39.305712       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0919 21:58:39.323243       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0919 21:58:39.657933       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 24432 (27094)\nI0919 21:58:39.658146       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 25153 (27094)\nI0919 21:58:40.663078       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0919 21:58:40.663471       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0919 21:58:55.806626       1 httplog.go:90] GET /metrics: (6.424164ms) 200 [Prometheus/2.15.2 10.128.2.14:36098]\nI0919 21:58:57.934482       1 httplog.go:90] GET /metrics: (2.514257ms) 200 [Prometheus/2.15.2 10.129.2.14:50752]\n
Sep 19 21:59:14.102 E ns/openshift-kube-storage-version-migrator pod/migrator-686fc6cc66-45jr4 node/ip-10-0-174-194.us-west-1.compute.internal container=migrator container exited with code 2 (Error): 
Sep 19 21:59:20.581 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-34.us-west-1.compute.internal node/ip-10-0-152-34.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): -manager-operator/configmaps?allowWatchBookmarks=true&resourceVersion=27203&timeout=7m12s&timeoutSeconds=432&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:59:19.356901       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/configmaps?allowWatchBookmarks=true&resourceVersion=26155&timeout=9m3s&timeoutSeconds=543&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:59:19.363860       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=26155&timeout=9m8s&timeoutSeconds=548&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:59:19.366519       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=27227&timeout=8m42s&timeoutSeconds=522&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:59:19.368310       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=25318&timeout=7m26s&timeoutSeconds=446&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:59:19.399505       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=25318&timeout=5m0s&timeoutSeconds=300&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0919 21:59:20.113543       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0919 21:59:20.113616       1 leaderelection.go:67] leaderelection lost\n
Sep 19 21:59:20.581 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-152-34.us-west-1.compute.internal node/ip-10-0-152-34.us-west-1.compute.internal container=cluster-policy-controller container exited with code 255 (Error): ory.go:135: Failed to watch *v1.Job: Get https://localhost:6443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=23597&timeout=9m44s&timeoutSeconds=584&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:59:19.571555       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://localhost:6443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=24341&timeout=8m23s&timeoutSeconds=503&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:59:19.573473       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://localhost:6443/apis/networking.k8s.io/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=24814&timeout=9m27s&timeoutSeconds=567&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:59:19.574746       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=25296&timeout=6m0s&timeoutSeconds=360&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:59:19.577373       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://localhost:6443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=24341&timeout=5m14s&timeoutSeconds=314&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 21:59:19.579251       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=24341&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0919 21:59:20.301156       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0919 21:59:20.301217       1 policy_controller.go:94] leaderelection lost\n
Sep 19 21:59:21.745 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-152-34.us-west-1.compute.internal node/ip-10-0-152-34.us-west-1.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 19 21:59:24.200 E ns/openshift-monitoring pod/node-exporter-77lz7 node/ip-10-0-239-137.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): -19T21:34:34Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T21:34:34Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 21:59:24.252 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-69f9mtn78 node/ip-10-0-239-137.us-west-1.compute.internal container=operator container exited with code 255 (Error): resync\nI0919 21:57:18.258846       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0919 21:57:18.261468       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0919 21:57:18.267730       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0919 21:57:18.297841       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0919 21:57:19.854701       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0919 21:57:23.901768       1 httplog.go:90] GET /metrics: (8.090578ms) 200 [Prometheus/2.15.2 10.129.2.14:43580]\nI0919 21:57:37.588157       1 httplog.go:90] GET /metrics: (8.011183ms) 200 [Prometheus/2.15.2 10.128.2.14:60608]\nI0919 21:57:53.901942       1 httplog.go:90] GET /metrics: (8.332091ms) 200 [Prometheus/2.15.2 10.129.2.14:43580]\nI0919 21:58:07.587846       1 httplog.go:90] GET /metrics: (7.897344ms) 200 [Prometheus/2.15.2 10.128.2.14:60608]\nI0919 21:58:23.904451       1 httplog.go:90] GET /metrics: (10.84845ms) 200 [Prometheus/2.15.2 10.129.2.14:43580]\nI0919 21:58:37.587848       1 httplog.go:90] GET /metrics: (7.826828ms) 200 [Prometheus/2.15.2 10.128.2.14:60608]\nI0919 21:58:53.903725       1 httplog.go:90] GET /metrics: (10.207848ms) 200 [Prometheus/2.15.2 10.129.2.14:43580]\nI0919 21:59:00.337978       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.ClusterOperator total 114 items received\nI0919 21:59:07.594076       1 httplog.go:90] GET /metrics: (13.298438ms) 200 [Prometheus/2.15.2 10.128.2.14:60608]\nI0919 21:59:12.836779       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0919 21:59:12.837617       1 status_controller.go:212] Shutting down StatusSyncer-service-catalog-controller-manager\nI0919 21:59:12.837800       1 operator.go:227] Shutting down ServiceCatalogControllerManagerOperator\nF0919 21:59:12.837878       1 builder.go:243] stopped\n
Sep 19 21:59:27.283 E ns/openshift-authentication-operator pod/authentication-operator-7d695f58f6-cb87l node/ip-10-0-239-137.us-west-1.compute.internal container=operator container exited with code 255 (Error): rue","type":"Available"},{"lastTransitionTime":"2020-09-19T21:29:34Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0919 21:59:14.249611       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"597659e2-849c-4282-bc13-75ec56fea9be", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from False to True ("Progressing: deployment's observed generation did not reach the expected generation")\nI0919 21:59:19.000621       1 status_controller.go:176] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-19T21:39:51Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-19T21:59:14Z","message":"Progressing: not all deployment replicas are ready","reason":"_OAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-09-19T21:46:15Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-19T21:29:34Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0919 21:59:19.010457       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"597659e2-849c-4282-bc13-75ec56fea9be", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing message changed from "Progressing: deployment's observed generation did not reach the expected generation" to "Progressing: not all deployment replicas are ready"\nI0919 21:59:26.268939       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0919 21:59:26.269043       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0919 21:59:26.269207       1 builder.go:210] server exited\n
Sep 19 21:59:35.521 E ns/openshift-controller-manager pod/controller-manager-7h6pk node/ip-10-0-239-137.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): erver ("unable to decode an event from the watch stream: stream error: stream ID 409; INTERNAL_ERROR") has prevented the request from succeeding\nW0919 21:56:08.502491       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 413; INTERNAL_ERROR") has prevented the request from succeeding\nW0919 21:56:08.504253       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 311; INTERNAL_ERROR") has prevented the request from succeeding\nW0919 21:56:38.484278       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 439; INTERNAL_ERROR") has prevented the request from succeeding\nW0919 21:56:38.484526       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 441; INTERNAL_ERROR") has prevented the request from succeeding\nW0919 21:56:38.484636       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 447; INTERNAL_ERROR") has prevented the request from succeeding\nW0919 21:56:38.484718       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 449; INTERNAL_ERROR") has prevented the request from succeeding\nW0919 21:56:38.484795       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 303; INTERNAL_ERROR") has prevented the request from succeeding\n
Sep 19 21:59:38.702 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-233-113.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/09/19 21:42:06 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 19 21:59:38.702 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-233-113.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/09/19 21:42:06 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/19 21:42:06 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 21:42:06 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 21:42:06 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/19 21:42:06 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 21:42:06 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/19 21:42:06 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 21:42:06 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/19 21:42:06 http.go:107: HTTPS: listening on [::]:9091\nI0919 21:42:06.731242       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/19 21:58:18 oauthproxy.go:774: basicauth: 10.130.0.21:46162 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 21:58:18 oauthproxy.go:774: basicauth: 10.130.0.21:46162 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 21:59:36 oauthproxy.go:774: basicauth: 10.130.0.60:59416 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 21:59:36 oauthproxy.go:774: basicauth: 10.130.0.60:59416 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 19 21:59:38.702 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-233-113.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-19T21:42:06.127647437Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.35.1'."\nlevel=info ts=2020-09-19T21:42:06.127802091Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-09-19T21:42:06.129556937Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-19T21:42:11.27277243Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Sep 19 21:59:55.486 E ns/openshift-service-ca-operator pod/service-ca-operator-55cb78db57-599wr node/ip-10-0-239-137.us-west-1.compute.internal container=operator container exited with code 255 (Error): 
Sep 19 22:00:59.141 E ns/openshift-monitoring pod/node-exporter-lld5w node/ip-10-0-152-34.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): -19T21:34:30Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T21:34:30Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 22:01:22.057 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-233-113.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T22:00:41.066Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T22:00:41.073Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T22:00:41.074Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T22:00:41.075Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T22:00:41.075Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-19T22:00:41.075Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T22:00:41.075Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T22:00:41.075Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T22:00:41.075Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T22:00:41.075Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T22:00:41.075Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T22:00:41.075Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T22:00:41.075Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T22:00:41.075Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T22:00:41.076Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T22:00:41.076Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 22:01:26.557 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-174-194.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/09/19 21:42:26 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/09/19 21:42:29 config map updated\n2020/09/19 21:42:29 successfully triggered reload\n2020/09/19 21:42:30 config map updated\n2020/09/19 21:42:30 successfully triggered reload\n
Sep 19 22:01:26.557 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-174-194.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/09/19 21:42:27 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/19 21:42:27 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 21:42:27 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 21:42:27 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/19 21:42:27 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 21:42:27 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/19 21:42:27 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 21:42:27 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0919 21:42:27.163416       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/19 21:42:27 http.go:107: HTTPS: listening on [::]:9091\n2020/09/19 21:44:55 oauthproxy.go:774: basicauth: 10.128.2.4:33786 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 21:49:26 oauthproxy.go:774: basicauth: 10.128.2.4:38350 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 21:53:56 oauthproxy.go:774: basicauth: 10.128.2.4:42648 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 21:58:27 oauthproxy.go:774: basicauth: 10.128.2.4:47184 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 22:00:47 oa
Sep 19 22:01:26.557 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-174-194.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-19T21:42:26.567752167Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.35.1'."\nlevel=info ts=2020-09-19T21:42:26.567917807Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-09-19T21:42:26.569573516Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-19T21:42:31.702339982Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Sep 19 22:01:30.579 E ns/openshift-monitoring pod/node-exporter-kbml6 node/ip-10-0-174-194.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): -19T21:38:47Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T21:38:47Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 22:01:48.674 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-174-204.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T22:01:43.519Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T22:01:43.521Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T22:01:43.521Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T22:01:43.522Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T22:01:43.522Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-19T22:01:43.522Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T22:01:43.522Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T22:01:43.522Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T22:01:43.522Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T22:01:43.522Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T22:01:43.523Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T22:01:43.523Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T22:01:43.523Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T22:01:43.523Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T22:01:43.523Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T22:01:43.523Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 22:02:08.815 E ns/openshift-console pod/console-8699f56694-mrk7h node/ip-10-0-152-34.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020-09-19T21:39:45Z cmd/main: cookies are secure!\n2020-09-19T21:39:45Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-19T21:39:55Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-19T21:40:05Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-19T21:40:15Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-19T21:40:25Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-19T21:40:35Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-19T21:40:45Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-19T21:40:55Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-19T21:41:05Z cmd/main: Binding to [::]:8443...\n2020-09-19T21:41:05Z cmd/main: using TLS\n
Sep 19 22:02:34.909 E ns/openshift-sdn pod/sdn-controller-g6fq9 node/ip-10-0-152-34.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0919 21:28:47.428740       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0919 21:33:11.841025       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-e0260344-57484.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Sep 19 22:02:49.549 E ns/openshift-sdn pod/sdn-controller-ft9sc node/ip-10-0-169-195.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): ated HostSubnet ip-10-0-174-194.us-west-1.compute.internal (host: "ip-10-0-174-194.us-west-1.compute.internal", ip: "10.0.174.194", subnet: "10.128.2.0/23")\nI0919 21:38:16.075339       1 subnets.go:149] Created HostSubnet ip-10-0-233-113.us-west-1.compute.internal (host: "ip-10-0-233-113.us-west-1.compute.internal", ip: "10.0.233.113", subnet: "10.129.2.0/23")\nE0919 21:38:43.453221       1 leaderelection.go:367] Failed to update lock: Put https://api-int.ci-op-e0260344-57484.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: read tcp 10.0.169.195:34616->10.0.251.153:6443: read: connection reset by peer\nI0919 21:46:45.435794       1 vnids.go:115] Allocated netid 13112664 for namespace "e2e-openshift-api-available-1052"\nI0919 21:46:45.454690       1 vnids.go:115] Allocated netid 1591766 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-4981"\nI0919 21:46:45.483210       1 vnids.go:115] Allocated netid 10537485 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-2533"\nI0919 21:46:45.503727       1 vnids.go:115] Allocated netid 6743755 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-4086"\nI0919 21:46:45.528109       1 vnids.go:115] Allocated netid 15979034 for namespace "e2e-frontend-ingress-available-1782"\nI0919 21:46:45.620819       1 vnids.go:115] Allocated netid 11544184 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-9548"\nI0919 21:46:45.698638       1 vnids.go:115] Allocated netid 10839662 for namespace "e2e-kubernetes-api-available-5188"\nI0919 21:46:45.723724       1 vnids.go:115] Allocated netid 10962861 for namespace "e2e-check-for-critical-alerts-8084"\nI0919 21:46:45.757397       1 vnids.go:115] Allocated netid 7931919 for namespace "e2e-k8s-sig-apps-deployment-upgrade-2061"\nI0919 21:46:45.795905       1 vnids.go:115] Allocated netid 13826070 for namespace "e2e-k8s-service-lb-available-7755"\nI0919 21:46:45.821592       1 vnids.go:115] Allocated netid 13269918 for namespace "e2e-k8s-sig-apps-job-upgrade-57"\n
Sep 19 22:03:03.373 E ns/openshift-multus pod/multus-njcnv node/ip-10-0-233-113.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 22:03:09.710 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-169-195.us-west-1.compute.internal node/ip-10-0-169-195.us-west-1.compute.internal container=kube-controller-manager container exited with code 255 (Error): nshift.io/v1/templates?allowWatchBookmarks=true&resourceVersion=29081&timeout=6m34s&timeoutSeconds=394&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 22:03:08.111623       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/snapshot.storage.k8s.io/v1beta1/volumesnapshotclasses?allowWatchBookmarks=true&resourceVersion=22121&timeout=5m33s&timeoutSeconds=333&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 22:03:08.112646       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRole: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=29341&timeout=8m34s&timeoutSeconds=514&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 22:03:08.114113       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/machine.openshift.io/v1beta1/machinehealthchecks?allowWatchBookmarks=true&resourceVersion=25675&timeout=7m45s&timeoutSeconds=465&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 22:03:08.115152       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies?allowWatchBookmarks=true&resourceVersion=25295&timeout=5m40s&timeoutSeconds=340&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 22:03:08.309272       1 cronjob_controller.go:125] Failed to extract job list: Get https://localhost:6443/apis/batch/v1/jobs?limit=500: dial tcp [::1]:6443: connect: connection refused\nI0919 22:03:08.738169       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0919 22:03:08.738287       1 controllermanager.go:291] leaderelection lost\n
Sep 19 22:03:25.652 E ns/openshift-sdn pod/sdn-controller-g99hk node/ip-10-0-239-137.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0919 21:28:48.230803       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0919 21:31:43.597362       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: request timed out\nE0919 21:33:11.834363       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-e0260344-57484.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Sep 19 22:03:25.878 E ns/openshift-sdn pod/sdn-76sp6 node/ip-10-0-174-194.us-west-1.compute.internal container=sdn container exited with code 255 (Error):  2058 roundrobin.go:217] Delete endpoint 10.130.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0919 22:02:32.286465    2058 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.17:8443 10.129.0.4:8443]\nI0919 22:02:32.286476    2058 roundrobin.go:217] Delete endpoint 10.130.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0919 22:02:32.416379    2058 proxier.go:368] userspace proxy: processing 0 service events\nI0919 22:02:32.416406    2058 proxier.go:347] userspace syncProxyRules took 28.128594ms\nI0919 22:03:02.536052    2058 proxier.go:368] userspace proxy: processing 0 service events\nI0919 22:03:02.536078    2058 proxier.go:347] userspace syncProxyRules took 27.681431ms\nI0919 22:03:14.797291    2058 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.152.34:10257 10.0.239.137:10257]\nI0919 22:03:14.797331    2058 roundrobin.go:217] Delete endpoint 10.0.169.195:10257 for service "openshift-kube-controller-manager/kube-controller-manager:https"\nI0919 22:03:14.928314    2058 proxier.go:368] userspace proxy: processing 0 service events\nI0919 22:03:14.928341    2058 proxier.go:347] userspace syncProxyRules took 28.34361ms\nI0919 22:03:19.522751    2058 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0919 22:03:20.822548    2058 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.152.34:10257 10.0.169.195:10257 10.0.239.137:10257]\nI0919 22:03:20.940925    2058 proxier.go:368] userspace proxy: processing 0 service events\nI0919 22:03:20.940948    2058 proxier.go:347] userspace syncProxyRules took 28.074337ms\nF0919 22:03:25.041010    2058 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 19 22:03:34.742 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-195.us-west-1.compute.internal node/ip-10-0-169-195.us-west-1.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 19 22:04:23.518 E ns/openshift-sdn pod/sdn-lrt5m node/ip-10-0-233-113.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ice-test:" (:30018/tcp)\nI0919 22:04:13.065326   77915 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:32676/tcp)\nI0919 22:04:13.065789   77915 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:30072/tcp)\nI0919 22:04:13.089903   77915 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 31753\nI0919 22:04:13.210283   77915 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0919 22:04:13.210327   77915 cmd.go:173] openshift-sdn network plugin registering startup\nI0919 22:04:13.210435   77915 cmd.go:177] openshift-sdn network plugin ready\nI0919 22:04:13.524746   77915 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0919 22:04:14.155078   77915 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0919 22:04:14.940804   77915 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0919 22:04:15.922554   77915 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0919 22:04:17.147773   77915 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0919 22:04:18.678677   77915 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0919 22:04:20.591055   77915 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0919 22:04:22.980460   77915 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0919 22:04:23.010573   77915 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 19 22:04:54.038 E ns/openshift-sdn pod/sdn-q7c25 node/ip-10-0-239-137.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 5693  107018 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:32676/tcp)\nI0919 22:04:28.533888  107018 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 31753\nI0919 22:04:28.546825  107018 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0919 22:04:28.546891  107018 cmd.go:173] openshift-sdn network plugin registering startup\nI0919 22:04:28.546999  107018 cmd.go:177] openshift-sdn network plugin ready\nI0919 22:04:44.484398  107018 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.17:6443 10.129.0.65:6443 10.130.0.73:6443]\nI0919 22:04:44.484441  107018 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.17:8443 10.129.0.65:8443 10.130.0.73:8443]\nI0919 22:04:44.507373  107018 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.65:6443 10.130.0.73:6443]\nI0919 22:04:44.507477  107018 roundrobin.go:217] Delete endpoint 10.128.0.17:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0919 22:04:44.507544  107018 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.65:8443 10.130.0.73:8443]\nI0919 22:04:44.507589  107018 roundrobin.go:217] Delete endpoint 10.128.0.17:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0919 22:04:44.713387  107018 proxier.go:368] userspace proxy: processing 0 service events\nI0919 22:04:44.713504  107018 proxier.go:347] userspace syncProxyRules took 52.50357ms\nI0919 22:04:44.902242  107018 proxier.go:368] userspace proxy: processing 0 service events\nI0919 22:04:44.902357  107018 proxier.go:347] userspace syncProxyRules took 39.418451ms\nF0919 22:04:53.170196  107018 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
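For context on the three "SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition" fatals above: the message shape suggests a poll-until-timeout loop over the OVS database unix socket, where the "timed out waiting for the condition" text is the standard apimachinery wait error. The sketch below is only an illustration under that assumption, not the openshift-sdn healthcheck.go implementation:

    package main

    import (
        "net"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/klog"
    )

    const ovsSocket = "/var/run/openvswitch/db.sock"

    // probe reports whether the OVS database server currently
    // accepts connections on its unix socket.
    func probe() bool {
        conn, err := net.DialTimeout("unix", ovsSocket, time.Second)
        if err != nil {
            klog.Infof("SDN healthcheck unable to reconnect to OVS server: %v", err)
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        // Poll the socket; if it does not come back before the deadline,
        // PollImmediate returns wait.ErrWaitTimeout ("timed out waiting for
        // the condition") and the daemon exits so it restarts with a clean
        // view of the possibly replaced OVS server. Timings are illustrative.
        err := wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
            return probe(), nil
        })
        if err != nil {
            klog.Fatalf("SDN healthcheck detected OVS server change, restarting: %v", err)
        }
    }

That behavior is consistent with the earlier "dial unix /var/run/openvswitch/db.sock: connect: no such file or directory" and "ovs-ofctl ... Connection refused" lines: while OVS is being replaced during the node upgrade, the sdn pods exit with code 255 and are restarted by their DaemonSet.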
Sep 19 22:06:46.508 E ns/openshift-multus pod/multus-9hcwh node/ip-10-0-174-204.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 22:07:49.281 E ns/openshift-multus pod/multus-s8r9m node/ip-10-0-152-34.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 22:09:03.208 E ns/openshift-multus pod/multus-bsj9z node/ip-10-0-169-195.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error):