Result: SUCCESS
Tests: 4 failed / 67 succeeded
Started: 2020-09-16 01:33
Elapsed: 1h46m
Work namespace: ci-op-dytt3k0l
Refs: master:b14d7ce0, 247:6677b81c
Pod: 807ed7af-f7bc-11ea-bbdb-0a580a800cc3
Repo: openshift/cloud-credential-operator
Revision: 1

Test Failures


Cluster upgrade [sig-api-machinery] OAuth APIs remain available (47m19s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-api\-machinery\]\sOAuth\sAPIs\sremain\savailable$'
API "oauth-api-available" was unreachable during disruption for at least 1s of 47m19s (0%), this is currently sufficient to pass the test/job but not considered completely correct:

Sep 16 02:58:14.575 I oauth-apiserver OAuth API stopped responding to GET requests: Get "https://api.ci-op-dytt3k0l-21e62.origin-ci-int-aws.dev.rhcloud.com:6443/apis/oauth.openshift.io/v1/oauthaccesstokens/missing?timeout=15s": dial tcp 3.216.192.101:6443: connect: connection refused
Sep 16 02:58:15.535 E oauth-apiserver OAuth API is not responding to GET requests
Sep 16 02:58:15.757 I oauth-apiserver OAuth API started responding to GET requests
				from junit_upgrade_1600225904.xml
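
For context, the "remain available" checks poll a single GET endpoint throughout the upgrade and record every window in which requests fail. The sketch below illustrates that idea only; it is not the monitor used by the origin test suite, and the probe URL, interval, and timeout are assumptions chosen to keep it self-contained.

// disruption_probe.go: a minimal sketch of an availability poller in the
// spirit of the "APIs remain available" checks above. It is NOT the origin
// test suite's monitor; the target URL, interval, and timeout are assumptions.
package main

import (
	"crypto/tls"
	"flag"
	"fmt"
	"net/http"
	"time"
)

func main() {
	target := flag.String("url", "https://127.0.0.1:6443/healthz", "endpoint to probe (assumed default)")
	interval := flag.Duration("interval", time.Second, "time between probes")
	duration := flag.Duration("duration", time.Minute, "total time to probe")
	flag.Parse()

	client := &http.Client{
		Timeout: 15 * time.Second,
		// TLS verification is skipped only to keep the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	var downSince time.Time
	var totalDown time.Duration
	deadline := time.Now().Add(*duration)

	for time.Now().Before(deadline) {
		resp, err := client.Get(*target)
		healthy := err == nil && resp.StatusCode < 500
		if resp != nil {
			resp.Body.Close()
		}

		switch {
		case !healthy && downSince.IsZero():
			// Transition from up to down: start a new outage window.
			downSince = time.Now()
			fmt.Printf("%s E endpoint stopped responding: %v\n", time.Now().Format(time.RFC3339), err)
		case healthy && !downSince.IsZero():
			// Transition from down to up: close the window and add it up.
			outage := time.Since(downSince)
			totalDown += outage
			fmt.Printf("%s I endpoint started responding again after %s\n", time.Now().Format(time.RFC3339), outage.Round(time.Millisecond))
			downSince = time.Time{}
		}
		time.Sleep(*interval)
	}

	pct := float64(totalDown) / float64(*duration) * 100
	fmt.Printf("unreachable for %s of %s (%.0f%%)\n", totalDown.Round(time.Second), *duration, pct)
}

Accumulating downtime this way also explains the percentages in the summaries: 1s out of 47m19s is roughly 0.04% of the window, which is why the report shows (0%) while still flagging the disruption.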



Cluster upgrade [sig-api-machinery] OpenShift APIs remain available (47m19s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-api\-machinery\]\sOpenShift\sAPIs\sremain\savailable$'
API "openshift-api-available" was unreachable during disruption for at least 1s of 47m18s (0%), this is currently sufficient to pass the test/job but not considered completely correct:

Sep 16 02:27:51.967 I openshift-apiserver OpenShift API stopped responding to GET requests: Get "https://api.ci-op-dytt3k0l-21e62.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s": dial tcp 54.152.157.222:6443: connect: connection refused
Sep 16 02:27:52.923 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 16 02:27:52.950 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1600225904.xml



Cluster upgrade [sig-network-edge] Application behind service load balancer with PDB is not disrupted (47m49s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-network\-edge\]\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 5s of 43m10s (0%), this is currently sufficient to pass the test/job but not considered completely correct:

Sep 16 02:54:18.264 E ns/e2e-k8s-service-lb-available-1152 svc/service-test Service stopped responding to GET requests over new connections
Sep 16 02:54:19.264 - 3s    E ns/e2e-k8s-service-lb-available-1152 svc/service-test Service is not responding to GET requests over new connections
Sep 16 02:54:23.745 I ns/e2e-k8s-service-lb-available-1152 svc/service-test Service started responding to GET requests over new connections
Sep 16 03:03:19.264 E ns/e2e-k8s-service-lb-available-1152 svc/service-test Service stopped responding to GET requests over new connections
Sep 16 03:03:19.331 I ns/e2e-k8s-service-lb-available-1152 svc/service-test Service started responding to GET requests over new connections
				from junit_upgrade_1600225904.xml
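
This check depends on a PodDisruptionBudget keeping at least one backend pod for svc/service-test running while nodes are drained during the upgrade. Purely to illustrate that dependency, the client-go sketch below creates such a budget; the namespace, label selector, minAvailable value, and object name are assumptions for the example, not the values the e2e test itself uses.

// pdb_sketch.go: an illustrative client-go example of the kind of
// PodDisruptionBudget the "service load balancer with PDB" test relies on.
// Namespace, selector, and minAvailable are assumptions, not the test's values.
package main

import (
	"context"
	"fmt"
	"log"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig the same way kubectl does (default loading rules).
	config, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{},
	).ClientConfig()
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("building clientset: %v", err)
	}

	minAvailable := intstr.FromInt(1)
	pdb := &policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "service-test"},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			// Assumed label; the real test labels its backend pods differently.
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "service-test"},
			},
		},
	}

	created, err := clientset.PolicyV1beta1().
		PodDisruptionBudgets("default").
		Create(context.TODO(), pdb, metav1.CreateOptions{})
	if err != nil {
		log.Fatalf("creating PDB: %v", err)
	}
	fmt.Printf("created PodDisruptionBudget %s/%s\n", created.Namespace, created.Name)
}

With a budget like this in place, node drains during the machine-config rollout evict the backend pods one at a time rather than all at once, which is what keeps the service reachable apart from the brief blips logged above.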



openshift-tests [sig-arch] Monitor cluster while tests execute (52m35s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-arch\]\sMonitor\scluster\swhile\stests\sexecute$'
107 error level events were detected during this test run:

Sep 16 02:25:21.567 E kube-apiserver Kube API started failing: Get "https://api.ci-op-dytt3k0l-21e62.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 16 02:28:29.075 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-190-136.ec2.internal node/ip-10-0-190-136.ec2.internal container/setup init container exited with code 124 (Error): ................................................................................
Sep 16 02:36:30.161 E ns/openshift-machine-api pod/machine-api-operator-67f8d8d457-x6ls4 node/ip-10-0-190-136.ec2.internal container/machine-api-operator container exited with code 2 (Error): 
Sep 16 02:39:09.838 E ns/openshift-cluster-machine-approver pod/machine-approver-678bb79b8f-hj7rv node/ip-10-0-190-136.ec2.internal container/machine-approver-controller container exited with code 2 (Error): v1beta1.CertificateSigningRequest: Get "https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32493&timeoutSeconds=368&watch=true": dial tcp 127.0.0.1:6443: connect: connection refused\nE0916 02:28:47.810644       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:99: Failed to watch *v1.ClusterOperator: Get "https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=42616&timeoutSeconds=303&watch=true": dial tcp 127.0.0.1:6443: connect: connection refused\nE0916 02:28:47.811616       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get "https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32493&timeoutSeconds=584&watch=true": dial tcp 127.0.0.1:6443: connect: connection refused\nE0916 02:28:48.811074       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:99: Failed to watch *v1.ClusterOperator: Get "https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=42616&timeoutSeconds=569&watch=true": dial tcp 127.0.0.1:6443: connect: connection refused\nE0916 02:28:48.812018       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get "https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32493&timeoutSeconds=588&watch=true": dial tcp 127.0.0.1:6443: connect: connection refused\nE0916 02:28:54.589691       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:99: Failed to watch *v1.ClusterOperator: the server is currently unable to handle the request (get clusteroperators.config.openshift.io)\n
Sep 16 02:39:32.451 E ns/openshift-monitoring pod/node-exporter-7b2z9 node/ip-10-0-226-214.ec2.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-09-16T02:08:46.359Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-09-16T02:08:46.359Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-09-16T02:08:46.359Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-09-16T02:08:46.359Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-09-16T02:08:46.359Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-09-16T02:08:46.359Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-09-16T02:08:46.360Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-09-16T02:08:46.361Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-09-16T02:08:46.361Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Sep 16 02:39:38.973 E ns/openshift-monitoring pod/kube-state-metrics-594cbc678d-n4hbj node/ip-10-0-174-42.ec2.internal container/kube-state-metrics container exited with code 2 (Error): 
Sep 16 02:39:40.054 E ns/openshift-monitoring pod/openshift-state-metrics-65545fb8cf-xm24q node/ip-10-0-174-42.ec2.internal container/openshift-state-metrics container exited with code 2 (Error): 
Sep 16 02:39:42.529 E ns/openshift-controller-manager pod/controller-manager-bflhc node/ip-10-0-155-169.ec2.internal container/controller-manager container exited with code 137 (Error): /tools/cache/reflector.go:156: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 1; INTERNAL_ERROR") has prevented the request from succeeding\nW0916 02:37:13.591947       1 reflector.go:424] k8s.io/client-go@v0.19.0-rc.3/tools/cache/reflector.go:156: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 193; INTERNAL_ERROR") has prevented the request from succeeding\nW0916 02:37:13.592045       1 reflector.go:424] k8s.io/client-go@v0.19.0-rc.3/tools/cache/reflector.go:156: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 11; INTERNAL_ERROR") has prevented the request from succeeding\nW0916 02:37:13.592112       1 reflector.go:424] k8s.io/client-go@v0.19.0-rc.3/tools/cache/reflector.go:156: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 51; INTERNAL_ERROR") has prevented the request from succeeding\nW0916 02:37:13.592156       1 reflector.go:424] k8s.io/client-go@v0.19.0-rc.3/tools/cache/reflector.go:156: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 23; INTERNAL_ERROR") has prevented the request from succeeding\nW0916 02:37:13.592204       1 reflector.go:424] k8s.io/client-go@v0.19.0-rc.3/tools/cache/reflector.go:156: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 205; INTERNAL_ERROR") has prevented the request from succeeding\nW0916 02:37:13.592260       1 reflector.go:424] k8s.io/client-go@v0.19.0-rc.3/tools/cache/reflector.go:156: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 213; INTERNAL_ERROR") has prevented the request from succeeding\n
Sep 16 02:40:05.275 E ns/openshift-monitoring pod/grafana-75869fd4c6-nlnhd node/ip-10-0-167-164.ec2.internal container/grafana-proxy container exited with code 2 (Error): 
Sep 16 02:40:10.345 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-167-164.ec2.internal container/prometheus container exited with code 2 (Error): level=error ts=2020-09-16T02:40:07.345Z caller=main.go:285 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 16 02:40:18.927 E ns/openshift-monitoring pod/node-exporter-vbjcx node/ip-10-0-155-169.ec2.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-09-16T02:03:16.718Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-09-16T02:03:16.719Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-09-16T02:03:16.719Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-09-16T02:03:16.719Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-09-16T02:03:16.719Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-09-16T02:03:16.719Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Sep 16 02:40:28.963 E ns/openshift-monitoring pod/thanos-querier-bdb65fc58-rltxk node/ip-10-0-226-214.ec2.internal container/oauth-proxy container exited with code 2 (Error):  oauthproxy.go:785: basicauth: 10.129.0.9:41594 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:15:15 oauthproxy.go:785: basicauth: 10.129.0.9:48322 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:16:15 oauthproxy.go:785: basicauth: 10.129.0.9:52320 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:17:15 oauthproxy.go:785: basicauth: 10.129.0.9:55640 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:18:15 oauthproxy.go:785: basicauth: 10.129.0.9:60868 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:23:46 oauthproxy.go:785: basicauth: 10.130.0.51:60142 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:25:47 oauthproxy.go:785: basicauth: 10.130.0.51:45330 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:26:46 oauthproxy.go:785: basicauth: 10.130.0.51:48530 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:27:46 oauthproxy.go:785: basicauth: 10.130.0.51:51668 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:31:46 oauthproxy.go:785: basicauth: 10.130.0.51:36172 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:33:46 oauthproxy.go:785: basicauth: 10.130.0.51:43662 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:34:46 oauthproxy.go:785: basicauth: 10.130.0.51:47120 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:36:46 oauthproxy.go:785: basicauth: 10.130.0.51:53390 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:37:46 oauthproxy.go:785: basicauth: 10.130.0.51:56368 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 16 02:40:35.776 E ns/openshift-console pod/console-5bbdfb68f7-7cxfw node/ip-10-0-225-82.ec2.internal container/console container exited with code 2 (Error): 2020-09-16T02:12:43Z cmd/main: Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\n2020-09-16T02:12:43Z cmd/main: cookies are secure!\n2020-09-16T02:12:43Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-16T02:12:53Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-16T02:13:03Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-16T02:13:14Z cmd/main: Binding to [::]:8443...\n2020-09-16T02:13:14Z cmd/main: using TLS\n
Sep 16 02:40:36.013 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-226-214.ec2.internal container/prometheus container exited with code 2 (Error): level=error ts=2020-09-16T02:40:33.240Z caller=main.go:285 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 16 02:42:03.265 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-75cd9d968b-8nmgg node/ip-10-0-155-169.ec2.internal container/csi-attacher container exited with code 2 (Error): 
Sep 16 02:42:03.265 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-75cd9d968b-8nmgg node/ip-10-0-155-169.ec2.internal container/csi-provisioner container exited with code 2 (Error): 
Sep 16 02:42:03.265 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-75cd9d968b-8nmgg node/ip-10-0-155-169.ec2.internal container/csi-driver container exited with code 2 (Error): 
Sep 16 02:42:03.265 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-75cd9d968b-8nmgg node/ip-10-0-155-169.ec2.internal container/csi-resizer container exited with code 255 (Error): Lost connection to CSI driver, exiting
Sep 16 02:42:03.265 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-75cd9d968b-8nmgg node/ip-10-0-155-169.ec2.internal container/csi-snapshotter container exited with code 255 (Error): Lost connection to CSI driver, exiting
Sep 16 02:42:14.327 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-2x27j node/ip-10-0-226-214.ec2.internal container/csi-liveness-probe container exited with code 2 (Error): 
Sep 16 02:42:14.327 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-2x27j node/ip-10-0-226-214.ec2.internal container/csi-driver container exited with code 2 (Error): 
Sep 16 02:42:20.718 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-rnnbg node/ip-10-0-190-136.ec2.internal container/csi-liveness-probe container exited with code 2 (Error): 
Sep 16 02:42:20.718 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-rnnbg node/ip-10-0-190-136.ec2.internal container/csi-driver container exited with code 2 (Error): 
Sep 16 02:42:45.412 E ns/openshift-cloud-credential-operator pod/pod-identity-webhook-675b48dd94-f9kqx node/ip-10-0-155-169.ec2.internal container/pod-identity-webhook container exited with code 137 (Error): 
Sep 16 02:44:26.186 E ns/openshift-sdn pod/ovs-bkhnd node/ip-10-0-190-136.ec2.internal container/openvswitch container exited with code 1 (Error): 39:42.314Z|00543|connmgr|INFO|br0<->unix#1361: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:39:42.353Z|00544|bridge|INFO|bridge br0: deleted interface vethc9e537f3 on port 47\n2020-09-16T02:39:48.454Z|00545|connmgr|INFO|br0<->unix#1364: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:39:48.482Z|00546|connmgr|INFO|br0<->unix#1367: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:39:48.519Z|00547|bridge|INFO|bridge br0: deleted interface vethecafd71e on port 59\n2020-09-16T02:39:49.503Z|00548|bridge|INFO|bridge br0: added interface veth709d766c on port 93\n2020-09-16T02:39:49.697Z|00549|connmgr|INFO|br0<->unix#1370: 5 flow_mods in the last 0 s (5 adds)\n2020-09-16T02:39:49.839Z|00550|connmgr|INFO|br0<->unix#1373: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:39:49.974Z|00551|bridge|INFO|bridge br0: added interface vethd8789736 on port 94\n2020-09-16T02:39:50.100Z|00552|connmgr|INFO|br0<->unix#1376: 5 flow_mods in the last 0 s (5 adds)\n2020-09-16T02:39:50.175Z|00553|connmgr|INFO|br0<->unix#1379: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:40:40.159Z|00554|connmgr|INFO|br0<->unix#1389: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:40:40.185Z|00555|connmgr|INFO|br0<->unix#1392: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:40:40.208Z|00556|bridge|INFO|bridge br0: deleted interface veth8b5b2e58 on port 17\n2020-09-16T02:40:40.243Z|00557|connmgr|INFO|br0<->unix#1395: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:40:40.289Z|00558|connmgr|INFO|br0<->unix#1398: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:40:40.324Z|00559|bridge|INFO|bridge br0: deleted interface vethf4005905 on port 61\n2020-09-16 02:44:25 info: Saving flows ...\n2020-09-16T02:44:25Z|00001|vconn|WARN|unix:/var/run/openvswitch/br0.mgmt: version negotiation failed (we support version 0x01, peer supports version 0x04)\novs-ofctl: br0: failed to connect to socket (Broken pipe)\n2020-09-16 02:44:25 info: Saved flows\nrm: cannot remove '/var/run/openvswitch/ovsdb-server.pid': No such file or directory\n
Sep 16 02:44:46.894 E ns/openshift-sdn pod/ovs-w45b7 node/ip-10-0-155-169.ec2.internal container/openvswitch container exited with code 1 (Error): 13e6c on port 16\n2020-09-16T02:40:47.644Z|01473|bridge|INFO|bridge br0: added interface veth86df9da4 on port 66\n2020-09-16T02:40:47.684Z|01474|connmgr|INFO|br0<->unix#1083: 5 flow_mods in the last 0 s (5 adds)\n2020-09-16T02:40:47.719Z|01475|connmgr|INFO|br0<->unix#1086: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:40:52.282Z|01476|connmgr|INFO|br0<->unix#1089: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:40:52.409Z|00017|jsonrpc|WARN|unix#1314: send error: Broken pipe\n2020-09-16T02:40:52.409Z|00018|reconnect|WARN|unix#1314: connection dropped (Broken pipe)\n2020-09-16T02:40:52.386Z|01477|connmgr|INFO|br0<->unix#1092: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:40:52.439Z|01478|bridge|INFO|bridge br0: deleted interface veth3d6054f6 on port 3\n2020-09-16T02:42:11.232Z|01479|bridge|INFO|bridge br0: added interface vethe5de332e on port 67\n2020-09-16T02:42:11.265Z|01480|connmgr|INFO|br0<->unix#1105: 5 flow_mods in the last 0 s (5 adds)\n2020-09-16T02:42:11.310Z|01481|connmgr|INFO|br0<->unix#1108: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:42:44.667Z|01482|connmgr|INFO|br0<->unix#1114: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:42:44.695Z|01483|connmgr|INFO|br0<->unix#1117: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:42:44.718Z|01484|bridge|INFO|bridge br0: deleted interface veth3d117b61 on port 47\n2020-09-16T02:44:43.641Z|01485|connmgr|INFO|br0<->unix#1133: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:44:43.670Z|01486|connmgr|INFO|br0<->unix#1136: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:44:43.690Z|01487|bridge|INFO|bridge br0: deleted interface vethb1db1c81 on port 9\n2020-09-16 02:44:46 info: Saving flows ...\n2020-09-16T02:44:46Z|00001|vconn|WARN|unix:/var/run/openvswitch/br0.mgmt: version negotiation failed (we support version 0x01, peer supports version 0x04)\novs-ofctl: br0: failed to connect to socket (Broken pipe)\n2020-09-16 02:44:46 info: Saved flows\nrm: cannot remove '/var/run/openvswitch/ovsdb-server.pid': No such file or directory\n
Sep 16 02:44:54.970 E ns/openshift-multus pod/multus-s5gdx node/ip-10-0-174-42.ec2.internal container/kube-multus container exited with code 137 (Error): 
Sep 16 02:45:33.274 E ns/openshift-sdn pod/ovs-wb6qj node/ip-10-0-167-164.ec2.internal container/openvswitch container exited with code 1 (Error): 2020-09-16T02:40:03.989Z|01395|connmgr|INFO|br0<->unix#390: 5 flow_mods in the last 0 s (5 adds)\n2020-09-16T02:40:04.031Z|01396|connmgr|INFO|br0<->unix#393: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:40:04.070Z|01397|bridge|INFO|bridge br0: added interface veth828aed0c on port 19\n2020-09-16T02:40:04.123Z|01398|connmgr|INFO|br0<->unix#396: 5 flow_mods in the last 0 s (5 adds)\n2020-09-16T02:40:04.173Z|01399|connmgr|INFO|br0<->unix#399: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:40:04.362Z|01400|connmgr|INFO|br0<->unix#402: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:40:04.393Z|01401|connmgr|INFO|br0<->unix#405: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:40:04.427Z|01402|bridge|INFO|bridge br0: deleted interface vethac81018b on port 6\n2020-09-16T02:40:31.705Z|01403|connmgr|INFO|br0<->unix#411: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:40:31.736Z|01404|connmgr|INFO|br0<->unix#414: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:40:31.759Z|01405|bridge|INFO|bridge br0: deleted interface vethbf016f94 on port 7\n2020-09-16T02:44:24.737Z|01406|connmgr|INFO|br0<->unix#443: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:44:24.776Z|01407|connmgr|INFO|br0<->unix#446: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:44:24.819Z|01408|bridge|INFO|bridge br0: deleted interface veth6a11b182 on port 4\n2020-09-16T02:44:33.218Z|01409|bridge|INFO|bridge br0: added interface veth77165f80 on port 20\n2020-09-16T02:44:33.252Z|01410|connmgr|INFO|br0<->unix#449: 5 flow_mods in the last 0 s (5 adds)\n2020-09-16T02:44:33.290Z|01411|connmgr|INFO|br0<->unix#452: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16 02:45:32 info: Saving flows ...\n2020-09-16T02:45:32Z|00001|vconn|WARN|unix:/var/run/openvswitch/br0.mgmt: version negotiation failed (we support version 0x01, peer supports version 0x04)\novs-ofctl: br0: failed to connect to socket (Broken pipe)\n2020-09-16 02:45:32 info: Saved flows\nrm: cannot remove '/var/run/openvswitch/ovsdb-server.pid': No such file or directory\n
Sep 16 02:45:35.959 E ns/openshift-multus pod/multus-ndlf7 node/ip-10-0-226-214.ec2.internal container/kube-multus container exited with code 137 (Error): 
Sep 16 02:46:03.086 E ns/openshift-sdn pod/ovs-qnvbk node/ip-10-0-226-214.ec2.internal container/openvswitch container exited with code 1 (Error): 1 deletes)\n2020-09-16T02:44:40.022Z|01399|connmgr|INFO|br0<->unix#636: 1 flow_mods in the last 0 s (1 deletes)\n2020-09-16T02:44:40.184Z|01400|connmgr|INFO|br0<->unix#640: 3 flow_mods in the last 0 s (3 adds)\n2020-09-16T02:44:40.231Z|01401|connmgr|INFO|br0<->unix#643: 1 flow_mods in the last 0 s (1 adds)\n2020-09-16T02:44:40.283Z|01402|connmgr|INFO|br0<->unix#646: 3 flow_mods in the last 0 s (3 adds)\n2020-09-16T02:44:40.313Z|01403|connmgr|INFO|br0<->unix#649: 1 flow_mods in the last 0 s (1 adds)\n2020-09-16T02:44:40.340Z|01404|connmgr|INFO|br0<->unix#652: 3 flow_mods in the last 0 s (3 adds)\n2020-09-16T02:44:40.362Z|01405|connmgr|INFO|br0<->unix#655: 1 flow_mods in the last 0 s (1 adds)\n2020-09-16T02:44:40.385Z|01406|connmgr|INFO|br0<->unix#658: 3 flow_mods in the last 0 s (3 adds)\n2020-09-16T02:44:40.411Z|01407|connmgr|INFO|br0<->unix#661: 1 flow_mods in the last 0 s (1 adds)\n2020-09-16T02:44:40.440Z|01408|connmgr|INFO|br0<->unix#664: 3 flow_mods in the last 0 s (3 adds)\n2020-09-16T02:44:40.467Z|01409|connmgr|INFO|br0<->unix#667: 1 flow_mods in the last 0 s (1 adds)\n2020-09-16T02:44:52.281Z|01410|connmgr|INFO|br0<->unix#670: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:44:52.310Z|01411|connmgr|INFO|br0<->unix#673: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:44:52.338Z|01412|bridge|INFO|bridge br0: deleted interface veth05f05a1c on port 3\n2020-09-16T02:44:54.525Z|01413|bridge|INFO|bridge br0: added interface vethc0e7bfff on port 36\n2020-09-16T02:44:54.562Z|01414|connmgr|INFO|br0<->unix#676: 5 flow_mods in the last 0 s (5 adds)\n2020-09-16T02:44:54.604Z|01415|connmgr|INFO|br0<->unix#679: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16 02:46:02 info: Saving flows ...\n2020-09-16T02:46:02Z|00001|vconn|WARN|unix:/var/run/openvswitch/br0.mgmt: version negotiation failed (we support version 0x01, peer supports version 0x04)\novs-ofctl: br0: failed to connect to socket (Broken pipe)\n2020-09-16 02:46:02 info: Saved flows\nrm: cannot remove '/var/run/openvswitch/ovsdb-server.pid': No such file or directory\n
Sep 16 02:46:21.061 E ns/openshift-multus pod/multus-h9z9q node/ip-10-0-225-82.ec2.internal container/kube-multus container exited with code 137 (Error): 
Sep 16 02:46:23.069 E ns/openshift-sdn pod/ovs-hb9qq node/ip-10-0-225-82.ec2.internal container/openvswitch container exited with code 1 (Error): 09-16T02:44:56.560Z|01725|connmgr|INFO|br0<->unix#1303: 1 flow_mods in the last 0 s (1 adds)\n2020-09-16T02:44:56.590Z|01726|connmgr|INFO|br0<->unix#1306: 3 flow_mods in the last 0 s (3 adds)\n2020-09-16T02:44:56.626Z|01727|connmgr|INFO|br0<->unix#1309: 1 flow_mods in the last 0 s (1 adds)\n2020-09-16T02:44:56.660Z|01728|connmgr|INFO|br0<->unix#1312: 3 flow_mods in the last 0 s (3 adds)\n2020-09-16T02:44:56.686Z|01729|connmgr|INFO|br0<->unix#1315: 1 flow_mods in the last 0 s (1 adds)\n2020-09-16T02:45:08.340Z|01730|connmgr|INFO|br0<->unix#1318: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:45:08.365Z|01731|connmgr|INFO|br0<->unix#1321: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:45:08.391Z|01732|bridge|INFO|bridge br0: deleted interface veth9674779a on port 3\n2020-09-16T02:45:15.241Z|01733|bridge|INFO|bridge br0: added interface veth970883a4 on port 82\n2020-09-16T02:45:15.272Z|01734|connmgr|INFO|br0<->unix#1324: 5 flow_mods in the last 0 s (5 adds)\n2020-09-16T02:45:15.308Z|01735|connmgr|INFO|br0<->unix#1327: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:45:32.889Z|01736|connmgr|INFO|br0<->unix#1333: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:45:32.925Z|01737|connmgr|INFO|br0<->unix#1336: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:45:32.948Z|01738|bridge|INFO|bridge br0: deleted interface veth61f53df8 on port 5\n2020-09-16T02:45:35.354Z|01739|bridge|INFO|bridge br0: added interface veth03aa9c03 on port 83\n2020-09-16T02:45:35.388Z|01740|connmgr|INFO|br0<->unix#1339: 5 flow_mods in the last 0 s (5 adds)\n2020-09-16T02:45:35.424Z|01741|connmgr|INFO|br0<->unix#1342: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16 02:46:21 info: Saving flows ...\n2020-09-16T02:46:22Z|00001|vconn|WARN|unix:/var/run/openvswitch/br0.mgmt: version negotiation failed (we support version 0x01, peer supports version 0x04)\novs-ofctl: br0: failed to connect to socket (Broken pipe)\n2020-09-16 02:46:22 info: Saved flows\nrm: cannot remove '/var/run/openvswitch/ovsdb-server.pid': No such file or directory\n
Sep 16 02:46:33.129 E ns/openshift-sdn pod/sdn-jpcvk node/ip-10-0-225-82.ec2.internal container/sdn container exited with code 255 (Error): 7] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.82:8443 10.129.0.95:8443 10.130.0.3:8443]\nI0916 02:45:39.951877  100265 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.82:8443 10.129.0.95:8443]\nI0916 02:45:39.951905  100265 roundrobin.go:217] Delete endpoint 10.130.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0916 02:45:39.951918  100265 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.82:6443 10.129.0.95:6443]\nI0916 02:45:39.951923  100265 roundrobin.go:217] Delete endpoint 10.130.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0916 02:45:40.065380  100265 proxier.go:370] userspace proxy: processing 0 service events\nI0916 02:45:40.066597  100265 proxier.go:349] userspace syncProxyRules took 36.057326ms\nI0916 02:45:40.207449  100265 proxier.go:370] userspace proxy: processing 0 service events\nI0916 02:45:40.208619  100265 proxier.go:349] userspace syncProxyRules took 34.706499ms\nI0916 02:46:23.340591  100265 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.82:8443 10.129.0.95:8443 10.130.0.68:8443]\nI0916 02:46:23.340627  100265 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.82:6443 10.129.0.95:6443 10.130.0.68:6443]\nI0916 02:46:23.517036  100265 proxier.go:370] userspace proxy: processing 0 service events\nI0916 02:46:23.519201  100265 proxier.go:349] userspace syncProxyRules took 38.134153ms\nI0916 02:46:26.293777  100265 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nF0916 02:46:32.065769  100265 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 16 02:47:05.972 E ns/openshift-multus pod/multus-tph6c node/ip-10-0-190-136.ec2.internal container/kube-multus container exited with code 137 (Error): 
Sep 16 02:47:48.580 E ns/openshift-multus pod/multus-8wnh2 node/ip-10-0-155-169.ec2.internal container/kube-multus container exited with code 137 (Error): 
Sep 16 02:48:30.755 E ns/openshift-multus pod/multus-5trpf node/ip-10-0-167-164.ec2.internal container/kube-multus container exited with code 137 (Error): 
Sep 16 02:49:06.383 E ns/openshift-machine-config-operator pod/machine-config-operator-59785fd5cc-7sz57 node/ip-10-0-190-136.ec2.internal container/machine-config-operator container exited with code 2 (Error): ns.k8s.io/v1 CustomResourceDefinition\nW0916 02:39:31.537007       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0916 02:39:38.700884       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0916 02:41:11.589993       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0916 02:41:18.142890       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0916 02:44:23.815184       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0916 02:44:30.440538       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0916 02:44:37.323762       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0916 02:46:33.656020       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0916 02:47:33.874513       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0916 02:49:01.283250       1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\n
Sep 16 02:51:30.201 E ns/openshift-machine-config-operator pod/machine-config-daemon-56jms node/ip-10-0-167-164.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Sep 16 02:51:33.887 E ns/openshift-machine-config-operator pod/machine-config-daemon-fn95z node/ip-10-0-190-136.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Sep 16 02:51:38.917 E ns/openshift-machine-config-operator pod/machine-config-daemon-2lkw6 node/ip-10-0-226-214.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Sep 16 02:51:43.916 E ns/openshift-machine-config-operator pod/machine-config-daemon-sx6c6 node/ip-10-0-174-42.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Sep 16 02:54:01.158 E ns/openshift-machine-config-operator pod/machine-config-server-464vr node/ip-10-0-155-169.ec2.internal container/machine-config-server container exited with code 2 (Error): I0916 01:57:47.834658       1 start.go:38] Version: machine-config-daemon-4.6.0-202006240615.p0-270-g0b1b2d5b-dirty (0b1b2d5b10751e41af79d2d75705ca03589a1f7e)\nI0916 01:57:47.835181       1 api.go:71] Launching server on :22624\nI0916 01:57:47.835199       1 api.go:71] Launching server on :22623\nI0916 02:04:25.019573       1 api.go:119] Pool worker requested by address:"10.0.224.182:8052" User-Agent:"Ignition/2.6.0" Accept-Header: "application/vnd.coreos.ignition+json;version=3.1.0, */*;q=0.1"\n
Sep 16 02:54:09.644 E ns/openshift-kube-storage-version-migrator pod/migrator-5cfc9b668f-6n49x node/ip-10-0-226-214.ec2.internal container/migrator container exited with code 2 (Error): I0916 02:39:08.596509       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0916 02:39:08.596682       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0916 02:39:08.596691       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0916 02:39:08.596701       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0916 02:39:08.596710       1 migrator.go:18] FLAG: --kubeconfig=""\nI0916 02:39:08.596718       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0916 02:39:08.596727       1 migrator.go:18] FLAG: --log_dir=""\nI0916 02:39:08.596734       1 migrator.go:18] FLAG: --log_file=""\nI0916 02:39:08.596741       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0916 02:39:08.596748       1 migrator.go:18] FLAG: --logtostderr="true"\nI0916 02:39:08.596755       1 migrator.go:18] FLAG: --skip_headers="false"\nI0916 02:39:08.596762       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0916 02:39:08.596768       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0916 02:39:08.596776       1 migrator.go:18] FLAG: --v="2"\nI0916 02:39:08.596783       1 migrator.go:18] FLAG: --vmodule=""\nI0916 02:39:08.599230       1 reflector.go:175] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125\n
Sep 16 02:54:09.705 E ns/openshift-monitoring pod/grafana-84c4f478db-mlkfv node/ip-10-0-226-214.ec2.internal container/grafana-proxy container exited with code 2 (Error): 
Sep 16 02:54:10.586 E ns/openshift-monitoring pod/prometheus-adapter-589d644b8d-qcwhn node/ip-10-0-226-214.ec2.internal container/prometheus-adapter container exited with code 2 (Error): I0916 02:39:53.107650       1 adapter.go:94] successfully using in-cluster auth\nI0916 02:39:53.951396       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0916 02:39:53.951445       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0916 02:39:53.952061       1 secure_serving.go:178] Serving securely on [::]:6443\nI0916 02:39:53.952716       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0916 02:39:53.952754       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
Sep 16 02:54:10.849 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-226-214.ec2.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/09/16 02:40:34 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 16 02:54:10.849 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-226-214.ec2.internal container/prometheus-proxy container exited with code 2 (Error): 2020/09/16 02:40:34 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/16 02:40:34 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/16 02:40:34 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/16 02:40:34 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/16 02:40:34 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/09/16 02:40:34 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/16 02:40:34 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2020/09/16 02:40:34 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/16 02:40:34 http.go:107: HTTPS: listening on [::]:9091\nI0916 02:40:34.963130       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/16 02:44:09 oauthproxy.go:785: basicauth: 10.131.0.28:58480 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:48:40 oauthproxy.go:785: basicauth: 10.131.0.28:35222 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:53:10 oauthproxy.go:785: basicauth: 10.131.0.28:40094 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 16 02:54:10.849 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-226-214.ec2.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-16T02:40:34.211009502Z caller=main.go:87 msg="Starting prometheus-config-reloader version 'rhel-8-golang-openshift-4.6'."\nlevel=error ts=2020-09-16T02:40:34.212716179Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post \"http://localhost:9090/-/reload\": dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-16T02:40:39.508938742Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-16T02:40:39.509034537Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Sep 16 02:54:39.412 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-174-42.ec2.internal container/prometheus container exited with code 2 (Error): level=error ts=2020-09-16T02:54:35.261Z caller=main.go:285 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 16 02:54:53.691 E ns/e2e-k8s-service-lb-available-1152 pod/service-test-ghb5b node/ip-10-0-226-214.ec2.internal container/netexec container exited with code 2 (Error): 
Sep 16 02:56:02.979 E clusteroperator/dns changed Degraded to True: DNSDegraded: DNS default is degraded
Sep 16 02:56:16.035 E clusteroperator/authentication changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver
Sep 16 02:56:36.981 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Sep 16 02:56:55.435 E ns/openshift-sdn pod/ovs-9bwpk node/ip-10-0-190-136.ec2.internal container/openvswitch container exited with code 1 (Error): 6T02:54:17.364Z|00157|connmgr|INFO|br0<->unix#307: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:54:17.423Z|00158|connmgr|INFO|br0<->unix#310: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:54:17.477Z|00159|bridge|INFO|bridge br0: deleted interface vethd8789736 on port 94\n2020-09-16T02:54:17.554Z|00160|connmgr|INFO|br0<->unix#313: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:54:17.609Z|00161|connmgr|INFO|br0<->unix#316: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:54:17.646Z|00162|bridge|INFO|bridge br0: deleted interface veth871c6d3d on port 84\n2020-09-16T02:54:28.052Z|00163|bridge|INFO|bridge br0: added interface veth4d2986ad on port 98\n2020-09-16T02:54:28.111Z|00164|connmgr|INFO|br0<->unix#322: 5 flow_mods in the last 0 s (5 adds)\n2020-09-16T02:54:28.159Z|00165|connmgr|INFO|br0<->unix#326: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:54:28.161Z|00166|connmgr|INFO|br0<->unix#328: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-09-16T02:54:33.263Z|00167|connmgr|INFO|br0<->unix#331: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:54:33.292Z|00168|connmgr|INFO|br0<->unix#334: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:54:33.325Z|00169|bridge|INFO|bridge br0: deleted interface veth7267b6b9 on port 91\n2020-09-16T02:54:33.477Z|00170|connmgr|INFO|br0<->unix#337: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:54:33.515Z|00171|connmgr|INFO|br0<->unix#340: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:54:33.555Z|00172|bridge|INFO|bridge br0: deleted interface veth4d2986ad on port 98\n info: Saving flows ...\n2020-09-16 02:55:05 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nrm: cannot remove '/var/run/openvswitch/ovs-vswitchd.pid': No such file or directory\nFailed to connect to bus: No data available\nopenvswitch is running in container\novsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process), aborting\nStarting ovsdb-server ... failed!\n
Sep 16 02:57:01.415 E ns/openshift-sdn pod/sdn-j76n2 node/ip-10-0-190-136.ec2.internal container/kube-rbac-proxy container exited with code 1 (Error): ed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  2501  100  2501    0     0   143k      0 --:--:-- --:--:-- --:--:--  143k\nI0916 02:45:19.922713  100088 main.go:188] Valid token audiences: \nI0916 02:45:19.922786  100088 main.go:261] Reading certificate files\nI0916 02:45:19.922981  100088 main.go:294] Starting TCP socket on :9101\nI0916 02:45:19.923417  100088 main.go:301] Listening securely on :9101\nI0916 02:55:04.690243  100088 main.go:356] received interrupt, shutting down\nE0916 02:55:04.690484  100088 main.go:309] failed to gracefully close secure listener: close tcp [::]:9101: use of closed network connection\n  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0curl: (7) Failed to connect to 172.30.0.1 port 443: No route to host\nTraceback (most recent call last):\n  File "<string>", line 1, in <module>\n  File "/usr/lib64/python3.6/json/__init__.py", line 299, in load\n    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\n  File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads\n    return _default_decoder.decode(s)\n  File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode\n    obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n  File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode\n    raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n
Sep 16 02:57:04.018 E ns/openshift-sdn pod/ovs-lf6km node/ip-10-0-226-214.ec2.internal container/openvswitch container exited with code 1 (Error): last 0 s (4 deletes)\n2020-09-16T02:54:09.687Z|00085|bridge|INFO|bridge br0: deleted interface veth6e35e6a9 on port 32\n2020-09-16T02:54:09.771Z|00086|connmgr|INFO|br0<->unix#130: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:54:09.820Z|00087|connmgr|INFO|br0<->unix#133: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:54:09.857Z|00088|bridge|INFO|bridge br0: deleted interface vetha29d6318 on port 34\n2020-09-16T02:54:09.926Z|00089|connmgr|INFO|br0<->unix#136: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:54:09.980Z|00090|connmgr|INFO|br0<->unix#139: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:54:10.021Z|00091|bridge|INFO|bridge br0: deleted interface veth8912ee9b on port 35\n2020-09-16T02:54:10.087Z|00092|connmgr|INFO|br0<->unix#142: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:54:10.141Z|00093|connmgr|INFO|br0<->unix#145: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:54:10.179Z|00094|bridge|INFO|bridge br0: deleted interface veth539058bb on port 27\n2020-09-16T02:54:10.237Z|00095|connmgr|INFO|br0<->unix#148: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:54:10.284Z|00096|connmgr|INFO|br0<->unix#151: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:54:10.321Z|00097|bridge|INFO|bridge br0: deleted interface vethd081063e on port 14\n2020-09-16T02:54:52.869Z|00098|connmgr|INFO|br0<->unix#161: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:54:52.898Z|00099|connmgr|INFO|br0<->unix#164: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:54:52.922Z|00100|bridge|INFO|bridge br0: deleted interface veth206eac45 on port 13\n2020-09-16 02:55:30 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nrm: cannot remove '/var/run/openvswitch/ovs-vswitchd.pid': No such file or directory\nFailed to connect to bus: No data available\nopenvswitch is running in container\novsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process), aborting\nStarting ovsdb-server ... failed!\n
Sep 16 02:57:05.983 E ns/openshift-sdn pod/sdn-pdpk6 node/ip-10-0-226-214.ec2.internal container/sdn container exited with code 255 (Error): ols/cache/reflector.go:125\nI0916 02:55:30.394043   97100 reflector.go:181] Stopping reflector *v1.HostSubnet (30m0s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 02:55:30.394107   97100 reflector.go:181] Stopping reflector *v1.Service (30s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 02:55:30.394168   97100 reflector.go:181] Stopping reflector *v1.Namespace (30s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 02:55:30.394227   97100 reflector.go:181] Stopping reflector *v1.NetworkPolicy (30s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 02:55:30.394289   97100 reflector.go:181] Stopping reflector *v1.EgressNetworkPolicy (30m0s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 02:57:03.531136    2246 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0916 02:57:03.550030    2246 feature_gate.go:243] feature gates: &{map[]}\nI0916 02:57:03.550102    2246 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0916 02:57:03.550150    2246 cmd.go:216] Watching config file /config/..2020_09_16_02_44_38.059115118/kube-proxy-config.yaml for changes\nI0916 02:57:03.666702    2246 node.go:150] Initializing SDN node "ip-10-0-226-214.ec2.internal" (10.0.226.214) of type "redhat/openshift-ovs-networkpolicy"\nI0916 02:57:03.700634    2246 cmd.go:159] Starting node networking (v0.0.0-alpha.0-203-g2e41615)\nI0916 02:57:03.700667    2246 node.go:338] Starting openshift-sdn network plugin\nI0916 02:57:03.936453    2246 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nI0916 02:57:04.049674    2246 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0916 02:57:04.554960    2246 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0916 02:57:05.747102    2246 cmd.go:111] Failed to start sdn: node SDN setup failed: Link not found\n
Sep 16 02:57:11.036 E ns/openshift-sdn pod/sdn-pdpk6 node/ip-10-0-226-214.ec2.internal container/kube-rbac-proxy container exited with code 1 (Error):   0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0curl: (7) Failed to connect to 172.30.0.1 port 443: No route to host\nTraceback (most recent call last):\n  File "<string>", line 1, in <module>\n  File "/usr/lib64/python3.6/json/__init__.py", line 299, in load\n    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\n  File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads\n    return _default_decoder.decode(s)\n  File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode\n    obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n  File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode\n    raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n
Sep 16 02:57:22.398 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-167-164.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/09/16 02:40:08 Watching directory: "/etc/alertmanager/config"\n
Sep 16 02:57:22.398 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-167-164.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/09/16 02:40:08 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/16 02:40:08 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/16 02:40:08 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/16 02:40:08 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/16 02:40:08 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/09/16 02:40:08 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/16 02:40:08 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2020/09/16 02:40:08 http.go:107: HTTPS: listening on [::]:9095\nI0916 02:40:08.438265       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 16 02:57:24.036 E ns/openshift-machine-config-operator pod/machine-config-controller-857f76587d-kx6j6 node/ip-10-0-155-169.ec2.internal container/machine-config-controller container exited with code 2 (Error): r: node ip-10-0-226-214.ec2.internal: Completed update to rendered-worker-5942b35c57925a34089611820aca75a1\nI0916 02:57:13.030008       1 node_controller.go:419] Pool worker: node ip-10-0-226-214.ec2.internal: Reporting ready\nE0916 02:57:14.400798       1 render_controller.go:459] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again\nI0916 02:57:14.400830       1 render_controller.go:376] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again\nI0916 02:57:17.932732       1 node_controller.go:414] Pool worker: 2 candidate nodes for update, capacity: 1\nI0916 02:57:17.932785       1 node_controller.go:414] Pool worker: Setting node ip-10-0-167-164.ec2.internal target to rendered-worker-5942b35c57925a34089611820aca75a1\nI0916 02:57:17.963728       1 node_controller.go:419] Pool worker: node ip-10-0-167-164.ec2.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-5942b35c57925a34089611820aca75a1\nI0916 02:57:17.975486       1 event.go:282] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"worker", UID:"037bea07-5ba7-4f9a-b52e-d7b0fe0569ab", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"72039", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-167-164.ec2.internal to config rendered-worker-5942b35c57925a34089611820aca75a1\nI0916 02:57:18.990551       1 node_controller.go:419] Pool worker: node ip-10-0-167-164.ec2.internal: changed annotation machineconfiguration.openshift.io/state = Working\nI0916 02:57:19.064348       1 node_controller.go:419] Pool worker: node ip-10-0-167-164.ec2.internal: Reporting unready: node ip-10-0-167-164.ec2.internal is reporting Unschedulable\n
Sep 16 02:57:43.205 E ns/openshift-cloud-credential-operator pod/pod-identity-webhook-574dd58c7c-r9zcq node/ip-10-0-155-169.ec2.internal container/pod-identity-webhook container exited with code 137 (Error): 
Sep 16 02:57:50.358 E ns/e2e-k8s-sig-apps-job-upgrade-2890 pod/foo-kf58v node/ip-10-0-167-164.ec2.internal container/c container exited with code 137 (Error): 
Sep 16 02:57:50.364 E ns/e2e-k8s-sig-apps-job-upgrade-2890 pod/foo-ht96x node/ip-10-0-167-164.ec2.internal container/c container exited with code 137 (Error): 
Sep 16 02:58:14.599 E kube-apiserver Kube API started failing: Get "https://api.ci-op-dytt3k0l-21e62.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s": dial tcp 54.152.157.222:6443: connect: connection refused
Sep 16 02:59:22.930 E ns/openshift-sdn pod/ovs-gfpsz node/ip-10-0-155-169.ec2.internal container/openvswitch container exited with code 1 (Error): 95|bridge|INFO|bridge br0: deleted interface vethe5de332e on port 67\n2020-09-16T02:57:42.673Z|00015|jsonrpc|WARN|unix#410: receive error: Connection reset by peer\n2020-09-16T02:57:42.673Z|00016|reconnect|WARN|unix#410: connection dropped (Connection reset by peer)\n2020-09-16T02:57:45.302Z|00196|connmgr|INFO|br0<->unix#441: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:57:45.373Z|00197|connmgr|INFO|br0<->unix#444: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:57:45.436Z|00198|bridge|INFO|bridge br0: deleted interface vethcb852a06 on port 83\n2020-09-16T02:57:45.526Z|00199|connmgr|INFO|br0<->unix#447: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:57:45.563Z|00200|connmgr|INFO|br0<->unix#450: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:57:45.632Z|00201|bridge|INFO|bridge br0: deleted interface veth8c1ab158 on port 82\n2020-09-16T02:57:47.290Z|00202|connmgr|INFO|br0<->unix#453: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:57:47.323Z|00203|connmgr|INFO|br0<->unix#456: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:57:47.350Z|00204|bridge|INFO|bridge br0: deleted interface veth17245b4a on port 84\n2020-09-16T02:57:48.283Z|00205|connmgr|INFO|br0<->unix#459: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:57:48.314Z|00206|connmgr|INFO|br0<->unix#462: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:57:48.343Z|00207|bridge|INFO|bridge br0: deleted interface vethb516ed89 on port 85\n2020-09-16 02:58:14 info: Saving flows ...\n2020-09-16T02:58:14Z|00001|vconn|WARN|unix:/var/run/openvswitch/br0.mgmt: version negotiation failed (we support version 0x01, peer supports version 0x04)\novs-ofctl: br0: failed to connect to socket (Broken pipe)\n2020-09-16 02:58:14 info: Saved flows\nrm: cannot remove '/var/run/openvswitch/ovsdb-server.pid': No such file or directory\nFailed to connect to bus: No data available\nopenvswitch is running in container\novsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process), aborting\nStarting ovsdb-server ... failed!\n
Sep 16 02:59:25.169 E ns/openshift-sdn pod/sdn-gqp92 node/ip-10-0-155-169.ec2.internal container/sdn container exited with code 255 (Error): nt decoding: unexpected EOF\nI0916 02:58:14.291072  100241 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0916 02:58:14.291099  100241 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0916 02:58:14.291184  100241 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0916 02:58:14.291221  100241 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0916 02:58:14.291305  100241 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0916 02:58:14.305903  100241 reflector.go:404] k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125: watch of *v1.Pod ended with: very short watch: k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125: Unexpected watch close - watch lasted less than a second and no items received\ninterrupt: Gracefully shutting down ...\nI0916 02:58:14.350324  100241 reflector.go:181] Stopping reflector *v1.Pod (30s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 02:59:23.453043    2917 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0916 02:59:23.460662    2917 feature_gate.go:243] feature gates: &{map[]}\nI0916 02:59:23.460853    2917 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0916 02:59:23.460999    2917 cmd.go:216] Watching config file /config/..2020_09_16_02_45_27.688513935/kube-proxy-config.yaml for changes\nI0916 02:59:23.575154    2917 node.go:150] Initializing SDN node "ip-10-0-155-169.ec2.internal" (10.0.155.169) of type "redhat/openshift-ovs-networkpolicy"\nI0916 02:59:23.606362    2917 cmd.go:159] Starting node networking (v0.0.0-alpha.0-203-g2e41615)\nI0916 02:59:23.606386    2917 node.go:338] Starting openshift-sdn network plugin\nI0916 02:59:24.078289    2917 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nF0916 02:59:24.651458    2917 cmd.go:111] Failed to start sdn: node SDN setup failed: Link not found\n
Sep 16 02:59:30.125 E ns/openshift-sdn pod/sdn-gqp92 node/ip-10-0-155-169.ec2.internal container/kube-rbac-proxy container exited with code 1 (Error): curl: (7) Failed to connect to 172.30.0.1 port 443: No route to host\nTraceback (most recent call last):\n  File "<string>", line 1, in <module>\n  File "/usr/lib64/python3.6/json/__init__.py", line 299, in load\n    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\n  File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads\n    return _default_decoder.decode(s)\n  File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode\n    obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n  File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode\n    raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n
Sep 16 02:59:52.412 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-7f48f64c84-9tv64 node/ip-10-0-225-82.ec2.internal container/aws-ebs-csi-driver-operator container exited with code 1 (Error): 
Sep 16 02:59:54.781 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-64d44f465f-vz554 node/ip-10-0-225-82.ec2.internal container/csi-driver container exited with code 2 (Error): 
Sep 16 02:59:54.781 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-64d44f465f-vz554 node/ip-10-0-225-82.ec2.internal container/csi-provisioner container exited with code 255 (Error): Lost connection to CSI driver, exiting
Sep 16 02:59:54.781 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-64d44f465f-vz554 node/ip-10-0-225-82.ec2.internal container/csi-attacher container exited with code 255 (Error): Lost connection to CSI driver, exiting
Sep 16 02:59:54.781 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-64d44f465f-vz554 node/ip-10-0-225-82.ec2.internal container/csi-resizer container exited with code 255 (Error): Lost connection to CSI driver, exiting
Sep 16 02:59:54.781 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-64d44f465f-vz554 node/ip-10-0-225-82.ec2.internal container/csi-snapshotter container exited with code 255 (Error): Lost connection to CSI driver, exiting
Sep 16 02:59:56.412 E ns/openshift-console-operator pod/console-operator-5cd648ddf5-f2bcq node/ip-10-0-225-82.ec2.internal container/console-operator container exited with code 1 (Error): ternalversions/factory.go:101\nI0916 02:59:49.003048       1 reflector.go:213] Stopping reflector *v1.Console (10m0s) from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0916 02:59:49.003131       1 reflector.go:213] Stopping reflector *v1.Service (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0916 02:59:49.003198       1 reflector.go:213] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0916 02:59:49.003223       1 reflector.go:213] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0916 02:59:49.003236       1 reflector.go:213] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0916 02:59:49.003367       1 reflector.go:213] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0916 02:59:49.003411       1 reflector.go:213] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0916 02:59:49.003496       1 reflector.go:213] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0916 02:59:49.003548       1 reflector.go:213] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0916 02:59:49.003582       1 reflector.go:213] Stopping reflector *v1.Deployment (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0916 02:59:49.003638       1 reflector.go:213] Stopping reflector *v1.Route (10m0s) from github.com/openshift/client-go/route/informers/externalversions/factory.go:101\nI0916 02:59:49.003712       1 reflector.go:213] Stopping reflector *v1.Proxy (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0916 02:59:49.003752       1 reflector.go:213] Stopping reflector *v1.OAuthClient (10m0s) from github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101\nW0916 02:59:49.003866       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\n
Sep 16 02:59:57.831 E ns/openshift-cluster-machine-approver pod/machine-approver-6f99558c5f-x6kxs node/ip-10-0-225-82.ec2.internal container/machine-approver-controller container exited with code 2 (Error): arting cluster operator status controller\nI0916 02:39:15.627226       1 reflector.go:175] Starting reflector *v1beta1.CertificateSigningRequest (0s) from github.com/openshift/cluster-machine-approver/main.go:239\nI0916 02:39:15.627362       1 reflector.go:175] Starting reflector *v1.ClusterOperator (0s) from github.com/openshift/cluster-machine-approver/status.go:99\nI0916 02:39:15.727569       1 main.go:147] CSR csr-mctrq added\nI0916 02:39:15.727644       1 main.go:150] CSR csr-mctrq is already approved\nI0916 02:39:15.727684       1 main.go:147] CSR csr-nbd8n added\nI0916 02:39:15.727711       1 main.go:150] CSR csr-nbd8n is already approved\nI0916 02:39:15.727741       1 main.go:147] CSR csr-pc48j added\nI0916 02:39:15.727767       1 main.go:150] CSR csr-pc48j is already approved\nI0916 02:39:15.727800       1 main.go:147] CSR csr-vv8s4 added\nI0916 02:39:15.727824       1 main.go:150] CSR csr-vv8s4 is already approved\nI0916 02:39:15.727873       1 main.go:147] CSR csr-79mpc added\nI0916 02:39:15.727901       1 main.go:150] CSR csr-79mpc is already approved\nI0916 02:39:15.727930       1 main.go:147] CSR csr-fd78w added\nI0916 02:39:15.727954       1 main.go:150] CSR csr-fd78w is already approved\nI0916 02:39:15.727981       1 main.go:147] CSR csr-k66ww added\nI0916 02:39:15.728010       1 main.go:150] CSR csr-k66ww is already approved\nI0916 02:39:15.728047       1 main.go:147] CSR csr-qlhzj added\nI0916 02:39:15.736679       1 main.go:150] CSR csr-qlhzj is already approved\nI0916 02:39:15.736764       1 main.go:147] CSR csr-w6pv2 added\nI0916 02:39:15.736795       1 main.go:150] CSR csr-w6pv2 is already approved\nI0916 02:39:15.736825       1 main.go:147] CSR csr-2rd9b added\nI0916 02:39:15.736851       1 main.go:150] CSR csr-2rd9b is already approved\nI0916 02:39:15.736882       1 main.go:147] CSR csr-dr98z added\nI0916 02:39:15.736907       1 main.go:150] CSR csr-dr98z is already approved\nI0916 02:39:15.736940       1 main.go:147] CSR csr-jvtwq added\nI0916 02:39:15.736965       1 main.go:150] CSR csr-jvtwq is already approved\n
Sep 16 02:59:57.991 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-7986bb4967-dtmrj node/ip-10-0-225-82.ec2.internal container/cluster-storage-operator container exited with code 1 (Error): us_controller.go:172] clusteroperator/storage diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-16T01:57:00Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-16T02:59:47Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-16T02:59:47Z","message":"AWSEBSCSIDriverOperatorDeploymentAvailable: Waiting for a Deployment pod to start","reason":"AWSEBSCSIDriverOperatorDeployment_WaitDeployment","status":"False","type":"Available"},{"lastTransitionTime":"2020-09-16T01:57:01Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0916 02:59:47.441818       1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-cluster-storage-operator", Name:"cluster-storage-operator", UID:"e38f2ea6-df51-4550-bb8d-bb5ac095483d", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/storage changed: Progressing changed from True to False ("")\nI0916 02:59:53.091813       1 cmd.go:88] Received SIGTERM or SIGINT signal, shutting down controller.\nI0916 02:59:53.092126       1 base_controller.go:147] Shutting down SnapshotCRDController ...\nI0916 02:59:53.092188       1 base_controller.go:147] Shutting down DefaultStorageClassController ...\nI0916 02:59:53.092229       1 base_controller.go:147] Shutting down StatusSyncer_storage ...\nI0916 02:59:53.092259       1 base_controller.go:147] Shutting down CSIDriverStarter ...\nI0916 02:59:53.092285       1 base_controller.go:147] Shutting down AWSEBSCSIDriverOperatorDeployment ...\nI0916 02:59:53.092307       1 base_controller.go:125] All AWSEBSCSIDriverOperatorDeployment post start hooks have been terminated\nI0916 02:59:53.092335       1 base_controller.go:147] Shutting down ManagementStateController ...\nI0916 02:59:53.092361       1 base_controller.go:147] Shutting down LoggingSyncer ...\nW0916 02:59:53.092393       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\n
Sep 16 02:59:58.176 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-74ccbb66bb-wd5rd node/ip-10-0-225-82.ec2.internal container/snapshot-controller container exited with code 2 (Error): 
Sep 16 02:59:58.248 E ns/openshift-service-ca pod/service-ca-74dd54cd4c-t4km5 node/ip-10-0-225-82.ec2.internal container/service-ca-controller container exited with code 1 (Error): 
Sep 16 02:59:58.324 E ns/openshift-service-ca-operator pod/service-ca-operator-57cd56fbfc-8kmpr node/ip-10-0-225-82.ec2.internal container/service-ca-operator container exited with code 1 (Error): 
Sep 16 03:00:17.159 E ns/openshift-console pod/console-79f84496b5-2xwrw node/ip-10-0-225-82.ec2.internal container/console container exited with code 2 (Error): 2020-09-16T02:40:13Z cmd/main: Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\n2020-09-16T02:40:13Z cmd/main: cookies are secure!\n2020-09-16T02:40:13Z cmd/main: Binding to [::]:8443...\n2020-09-16T02:40:13Z cmd/main: using TLS\n
Sep 16 03:00:59.037 E clusteroperator/monitoring changed Degraded to True: UpdatingTelemeterclientFailed: Failed to rollout the stack. Error: running task Updating Telemeter client failed: reconciling Telemeter client Prometheus Rule failed: updating PrometheusRule object failed: Internal error occurred: failed calling webhook "prometheusrules.openshift.io": Post "https://prometheus-operator.openshift-monitoring.svc:8080/admission-prometheusrules/validate?timeout=5s": x509: certificate signed by unknown authority
Sep 16 03:02:40.312 E clusteroperator/authentication changed Degraded to True: APIServerDeployment_UnavailablePod::OAuthRouteCheckEndpointAccessibleController_SyncError::WellKnownReadyController_SyncError: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver\nOAuthRouteCheckEndpointAccessibleControllerDegraded: Get "https://oauth-openshift.apps.ci-op-dytt3k0l-21e62.origin-ci-int-aws.dev.rhcloud.com/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2
Sep 16 03:02:49.590 E ns/openshift-sdn pod/ovs-g67z6 node/ip-10-0-167-164.ec2.internal container/openvswitch container exited with code 1 (Error): w_mods in the last 0 s (4 deletes)\n2020-09-16T02:58:05.369Z|00174|bridge|INFO|bridge br0: deleted interface veth27ea73de on port 17\n2020-09-16T02:58:06.042Z|00175|connmgr|INFO|br0<->unix#288: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T02:58:06.078Z|00176|connmgr|INFO|br0<->unix#291: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T02:58:06.105Z|00177|bridge|INFO|bridge br0: deleted interface veth4ef73d53 on port 12\n2020-09-16T02:59:16.938Z|00017|jsonrpc|WARN|unix#372: receive error: Connection reset by peer\n2020-09-16T02:59:16.938Z|00018|reconnect|WARN|unix#372: connection dropped (Connection reset by peer)\n2020-09-16T03:00:17.020Z|00019|jsonrpc|WARN|unix#388: receive error: Connection reset by peer\n2020-09-16T03:00:17.020Z|00020|reconnect|WARN|unix#388: connection dropped (Connection reset by peer)\n2020-09-16T03:01:17.081Z|00021|jsonrpc|WARN|unix#404: receive error: Connection reset by peer\n2020-09-16T03:01:17.081Z|00022|reconnect|WARN|unix#404: connection dropped (Connection reset by peer)\n2020-09-16 03:01:17 info: Saving flows ...\n2020-09-16T03:01:17Z|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-vswitchd.pid: open: No such file or directory\novs-appctl: cannot read pidfile "/var/run/openvswitch/ovs-vswitchd.pid" (No such file or directory)\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\novs-ofctl: br0 is not a bridge or a socket\n/usr/share/openvswitch/scripts/ovs-save: line 136: [: -ge: unary operator expected\novs-ofctl: Unknown OpenFlow version: "dump-groups"\novs-ofctl: Unknown OpenFlow version: "dump-flows"\n2020-09-16 03:01:17 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nrm: cannot remove '/var/run/openvswitch/ovs-vswitchd.pid': No such file or directory\nFailed to connect to bus: No data available\nopenvswitch is running in container\novsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process), aborting\nStarting ovsdb-server ... failed!\n
Sep 16 03:02:51.539 E ns/openshift-sdn pod/sdn-z7hwv node/ip-10-0-167-164.ec2.internal container/sdn container exited with code 255 (Error): che/reflector.go:125\nI0916 03:01:17.334797   65064 reflector.go:181] Stopping reflector *v1.Service (30s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 03:01:17.334826   65064 reflector.go:181] Stopping reflector *v1.Namespace (30s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 03:01:17.334861   65064 reflector.go:181] Stopping reflector *v1.EgressNetworkPolicy (30m0s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 03:01:17.334895   65064 reflector.go:181] Stopping reflector *v1.NetNamespace (30m0s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 03:02:48.237812    1898 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0916 03:02:48.242100    1898 feature_gate.go:243] feature gates: &{map[]}\nI0916 03:02:48.242152    1898 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0916 03:02:48.242226    1898 cmd.go:216] Watching config file /config/..2020_09_16_02_45_43.282733984/kube-proxy-config.yaml for changes\nI0916 03:02:48.359709    1898 node.go:150] Initializing SDN node "ip-10-0-167-164.ec2.internal" (10.0.167.164) of type "redhat/openshift-ovs-networkpolicy"\nI0916 03:02:48.384040    1898 cmd.go:159] Starting node networking (v0.0.0-alpha.0-203-g2e41615)\nI0916 03:02:48.384082    1898 node.go:338] Starting openshift-sdn network plugin\nI0916 03:02:48.793817    1898 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nI0916 03:02:48.942450    1898 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0916 03:02:49.449094    1898 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0916 03:02:50.079373    1898 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0916 03:02:51.267531    1898 cmd.go:111] Failed to start sdn: node SDN setup failed: Link not found\n
Sep 16 03:02:54.582 E clusteroperator/dns changed Degraded to True: DNSDegraded: DNS default is degraded
Sep 16 03:02:56.540 E ns/openshift-sdn pod/sdn-z7hwv node/ip-10-0-167-164.ec2.internal container/kube-rbac-proxy container exited with code 1 (Error):  0 --:--:-- --:--:-- --:--:--  174k\nI0916 02:45:44.739399   65195 main.go:188] Valid token audiences: \nI0916 02:45:44.739509   65195 main.go:261] Reading certificate files\nI0916 02:45:44.740818   65195 main.go:294] Starting TCP socket on :9101\nI0916 02:45:44.741658   65195 main.go:301] Listening securely on :9101\nI0916 03:01:17.285510   65195 main.go:356] received interrupt, shutting down\nE0916 03:01:17.285675   65195 main.go:309] failed to gracefully close secure listener: close tcp [::]:9101: use of closed network connection\n  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n
  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0curl: (7) Failed to connect to 172.30.0.1 port 443: No route to host\nTraceback (most recent call last):\n  File "<string>", line 1, in <module>\n  File "/usr/lib64/python3.6/json/__init__.py", line 299, in load\n    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\n  File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads\n    return _default_decoder.decode(s)\n  File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode\n    obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n  File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode\n    raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n
Sep 16 03:02:58.772 E ns/openshift-sdn pod/ovs-s222r node/ip-10-0-225-82.ec2.internal container/openvswitch container exited with code 1 (Error): 00|connmgr|INFO|br0<->unix#407: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T03:00:05.290Z|00201|connmgr|INFO|br0<->unix#410: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T03:00:05.335Z|00202|bridge|INFO|bridge br0: deleted interface veth8a5d24e7 on port 95\n2020-09-16T03:00:14.205Z|00203|connmgr|INFO|br0<->unix#416: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T03:00:14.243Z|00204|connmgr|INFO|br0<->unix#419: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T03:00:14.289Z|00205|bridge|INFO|bridge br0: deleted interface veth71bd44e6 on port 76\n2020-09-16T03:00:16.392Z|00206|connmgr|INFO|br0<->unix#422: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T03:00:16.420Z|00207|connmgr|INFO|br0<->unix#425: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T03:00:16.448Z|00208|bridge|INFO|bridge br0: deleted interface veth1b4fa726 on port 78\n2020-09-16T03:00:21.212Z|00209|bridge|INFO|bridge br0: added interface veth15dbdf5b on port 96\n2020-09-16T03:00:21.272Z|00210|connmgr|INFO|br0<->unix#428: 5 flow_mods in the last 0 s (5 adds)\n2020-09-16T03:00:21.341Z|00211|connmgr|INFO|br0<->unix#432: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T03:00:21.343Z|00212|connmgr|INFO|br0<->unix#434: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-09-16T03:00:26.218Z|00213|connmgr|INFO|br0<->unix#437: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T03:00:26.242Z|00214|connmgr|INFO|br0<->unix#440: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T03:00:26.271Z|00215|bridge|INFO|bridge br0: deleted interface veth15dbdf5b on port 96\n2020-09-16 03:01:47 info: Saving flows ...\n2020-09-16 03:01:47 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nrm: cannot remove '/var/run/openvswitch/ovs-vswitchd.pid': No such file or directory\nFailed to connect to bus: No data available\nopenvswitch is running in container\novsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process), aborting\nStarting ovsdb-server ... failed!\n
Sep 16 03:03:02.759 E ns/openshift-sdn pod/sdn-jpcvk node/ip-10-0-225-82.ec2.internal container/sdn container exited with code 255 (Error): admission-controller:metrics to [10.128.0.82:8443 10.129.0.95:8443 10.130.0.68:8443]\nI0916 02:46:23.340627  100265 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.82:6443 10.129.0.95:6443 10.130.0.68:6443]\nI0916 02:46:23.517036  100265 proxier.go:370] userspace proxy: processing 0 service events\nI0916 02:46:23.519201  100265 proxier.go:349] userspace syncProxyRules took 38.134153ms\nI0916 02:46:26.293777  100265 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nF0916 02:46:32.065769  100265 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\nI0916 03:02:58.534778    3100 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0916 03:02:58.540089    3100 feature_gate.go:243] feature gates: &{map[]}\nI0916 03:02:58.540213    3100 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0916 03:02:58.540283    3100 cmd.go:216] Watching config file /config/..2020_09_16_02_44_55.557878700/kube-proxy-config.yaml for changes\nI0916 03:02:58.693324    3100 node.go:150] Initializing SDN node "ip-10-0-225-82.ec2.internal" (10.0.225.82) of type "redhat/openshift-ovs-networkpolicy"\nI0916 03:02:58.755462    3100 cmd.go:159] Starting node networking (v0.0.0-alpha.0-203-g2e41615)\nI0916 03:02:58.755642    3100 node.go:338] Starting openshift-sdn network plugin\nI0916 03:02:59.365010    3100 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nI0916 03:02:59.665223    3100 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0916 03:03:00.169550    3100 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0916 03:03:01.911991    3100 cmd.go:111] Failed to start sdn: node SDN setup failed: Link not found\n
Sep 16 03:03:08.728 E ns/openshift-monitoring pod/grafana-84c4f478db-s97l7 node/ip-10-0-174-42.ec2.internal container/grafana-proxy container exited with code 2 (Error): 
Sep 16 03:03:08.790 E ns/openshift-marketplace pod/redhat-marketplace-kvg5q node/ip-10-0-174-42.ec2.internal container/registry-server container exited with code 2 (Error): 
Sep 16 03:03:08.831 E ns/openshift-marketplace pod/redhat-operators-gs84h node/ip-10-0-174-42.ec2.internal container/registry-server container exited with code 2 (Error): 
Sep 16 03:03:08.984 E ns/openshift-sdn pod/sdn-jpcvk node/ip-10-0-225-82.ec2.internal container/kube-rbac-proxy container exited with code 1 (Error):  the request due to an error: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp 172.30.0.1:443: connect: connection refused\nI0916 03:01:47.131147  100323 main.go:356] received interrupt, shutting down\nE0916 03:01:47.131355  100323 main.go:309] failed to gracefully close secure listener: close tcp [::]:9101: use of closed network connection\n  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n
  0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0curl: (7) Failed to connect to 172.30.0.1 port 443: No route to host\nTraceback (most recent call last):\n  File "<string>", line 1, in <module>\n  File "/usr/lib64/python3.6/json/__init__.py", line 299, in load\n    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\n  File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads\n    return _default_decoder.decode(s)\n  File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode\n    obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n  File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode\n    raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n
Sep 16 03:03:09.869 E ns/openshift-monitoring pod/kube-state-metrics-75545bc787-szrsp node/ip-10-0-174-42.ec2.internal container/kube-state-metrics container exited with code 2 (Error): 
Sep 16 03:03:09.949 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-174-42.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/09/16 02:54:18 Watching directory: "/etc/alertmanager/config"\n
Sep 16 03:03:09.949 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-174-42.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/09/16 02:54:19 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/16 02:54:19 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/16 02:54:19 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/16 02:54:19 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/16 02:54:19 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/09/16 02:54:19 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/16 02:54:19 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2020/09/16 02:54:19 http.go:107: HTTPS: listening on [::]:9095\nI0916 02:54:19.734331       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 16 03:03:09.977 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-174-42.ec2.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/09/16 02:54:37 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 16 03:03:09.977 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-174-42.ec2.internal container/prometheus-proxy container exited with code 2 (Error): 2020/09/16 02:54:38 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/16 02:54:38 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/16 02:54:38 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/16 02:54:38 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/16 02:54:38 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/09/16 02:54:38 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/16 02:54:38 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2020/09/16 02:54:38 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/16 02:54:38 http.go:107: HTTPS: listening on [::]:9091\nI0916 02:54:38.130222       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/16 02:57:33 oauthproxy.go:785: basicauth: 10.129.0.15:44838 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:57:33 oauthproxy.go:785: basicauth: 10.129.0.15:44838 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/16 02:57:40 oauthproxy.go:785: basicauth: 10.131.0.28:47326 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 16 03:03:09.977 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-174-42.ec2.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-16T02:54:37.07531704Z caller=main.go:87 msg="Starting prometheus-config-reloader version 'rhel-8-golang-openshift-4.6'."\nlevel=error ts=2020-09-16T02:54:37.077676758Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post \"http://localhost:9090/-/reload\": dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-16T02:54:42.334228911Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-16T02:54:42.33431639Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Sep 16 03:03:10.107 E ns/openshift-marketplace pod/redhat-operators-s7g5q node/ip-10-0-174-42.ec2.internal container/registry-server container exited with code 2 (Error): 
Sep 16 03:03:11.067 E ns/openshift-monitoring pod/telemeter-client-74f6fbc685-8jmtp node/ip-10-0-174-42.ec2.internal container/telemeter-client container exited with code 2 (Error): 
Sep 16 03:03:11.067 E ns/openshift-monitoring pod/telemeter-client-74f6fbc685-8jmtp node/ip-10-0-174-42.ec2.internal container/reload container exited with code 2 (Error): 
Sep 16 03:03:16.633 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Sep 16 03:03:21.962 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-167-164.ec2.internal container/prometheus container exited with code 2 (Error): level=error ts=2020-09-16T03:03:19.823Z caller=main.go:285 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 16 03:05:56.003 E ns/openshift-sdn pod/ovs-fvzgz node/ip-10-0-174-42.ec2.internal container/openvswitch container exited with code 1 (Error): 138Z|00150|connmgr|INFO|br0<->unix#333: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T03:03:11.168Z|00151|bridge|INFO|bridge br0: deleted interface veth0b303945 on port 44\n2020-09-16T03:03:11.216Z|00152|connmgr|INFO|br0<->unix#336: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T03:03:11.256Z|00153|connmgr|INFO|br0<->unix#339: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T03:03:11.282Z|00154|bridge|INFO|bridge br0: deleted interface vethafa2d889 on port 40\n2020-09-16T03:03:11.330Z|00155|connmgr|INFO|br0<->unix#342: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T03:03:11.366Z|00156|connmgr|INFO|br0<->unix#345: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T03:03:11.394Z|00157|bridge|INFO|bridge br0: deleted interface veth95fccdf3 on port 8\n2020-09-16T03:03:52.047Z|00015|jsonrpc|WARN|unix#490: receive error: Connection reset by peer\n2020-09-16T03:03:52.047Z|00016|reconnect|WARN|unix#490: connection dropped (Connection reset by peer)\n2020-09-16T03:03:51.501Z|00158|connmgr|INFO|br0<->unix#351: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T03:03:51.551Z|00159|connmgr|INFO|br0<->unix#354: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T03:03:51.582Z|00160|bridge|INFO|bridge br0: deleted interface vethb620ffa3 on port 31\n2020-09-16T03:03:51.992Z|00161|connmgr|INFO|br0<->unix#357: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-16T03:03:52.028Z|00162|connmgr|INFO|br0<->unix#360: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-16T03:03:52.055Z|00163|bridge|INFO|bridge br0: deleted interface veth6bf3e59a on port 26\n2020-09-16 03:04:29 info: Saving flows ...\n2020-09-16 03:04:29 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nrm: cannot remove '/var/run/openvswitch/ovs-vswitchd.pid': No such file or directory\nFailed to connect to bus: No data available\nopenvswitch is running in container\novsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process), aborting\nStarting ovsdb-server ... failed!\n
Sep 16 03:05:59.086 E ns/openshift-sdn pod/sdn-cwrg5 node/ip-10-0-174-42.ec2.internal container/sdn container exited with code 255 (Error): 30m0s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 03:04:29.730519  113046 reflector.go:181] Stopping reflector *v1.EgressNetworkPolicy (30m0s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 03:04:29.730576  113046 reflector.go:181] Stopping reflector *v1.Service (30s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 03:04:29.730618  113046 reflector.go:181] Stopping reflector *v1.Namespace (30s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 03:04:29.730675  113046 reflector.go:181] Stopping reflector *v1.NetworkPolicy (30s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0916 03:04:30.190072  113046 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0916 03:05:55.752601    1951 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0916 03:05:55.762446    1951 feature_gate.go:243] feature gates: &{map[]}\nI0916 03:05:55.762514    1951 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0916 03:05:55.762559    1951 cmd.go:216] Watching config file /config/..2020_09_16_02_44_28.986627829/kube-proxy-config.yaml for changes\nI0916 03:05:55.824175    1951 node.go:150] Initializing SDN node "ip-10-0-174-42.ec2.internal" (10.0.174.42) of type "redhat/openshift-ovs-networkpolicy"\nI0916 03:05:55.843464    1951 cmd.go:159] Starting node networking (v0.0.0-alpha.0-203-g2e41615)\nI0916 03:05:55.843591    1951 node.go:338] Starting openshift-sdn network plugin\nI0916 03:05:56.228169    1951 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nI0916 03:05:56.373437    1951 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0916 03:05:56.878924    1951 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0916 03:05:58.101514    1951 cmd.go:111] Failed to start sdn: node SDN setup failed: Link not found\n
Sep 16 03:06:04.145 E ns/openshift-sdn pod/sdn-cwrg5 node/ip-10-0-174-42.ec2.internal container/kube-rbac-proxy container exited with code 1 (Error):  0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0curl: (7) Failed to connect to 172.30.0.1 port 443: No route to host\nTraceback (most recent call last):\n  File "<string>", line 1, in <module>\n  File "/usr/lib64/python3.6/json/__init__.py", line 299, in load\n    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\n  File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads\n    return _default_decoder.decode(s)\n  File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode\n    obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n  File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode\n    raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n