Result SUCCESS
Tests 4 failed / 21 succeeded
Started 2020-07-17 13:41
Elapsed 1h41m
Work namespace ci-op-80zpv3cc
Refs openshift-4.3:e30a18f2
53:7e95ccff
pod 469e505d-c833-11ea-a58c-0a580a83046e
repo openshift/etcd
revision 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted 45m44s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 6s of 32m40s (0%):

Jul 17 14:40:43.640 E ns/e2e-k8s-service-lb-available-6750 svc/service-test Service stopped responding to GET requests over new connections
Jul 17 14:40:44.640 - 2s    E ns/e2e-k8s-service-lb-available-6750 svc/service-test Service is not responding to GET requests over new connections
Jul 17 14:40:46.945 I ns/e2e-k8s-service-lb-available-6750 svc/service-test Service started responding to GET requests over new connections
Jul 17 14:41:13.640 E ns/e2e-k8s-service-lb-available-6750 svc/service-test Service stopped responding to GET requests over new connections
Jul 17 14:41:13.861 I ns/e2e-k8s-service-lb-available-6750 svc/service-test Service started responding to GET requests over new connections
Jul 17 14:41:34.640 E ns/e2e-k8s-service-lb-available-6750 svc/service-test Service stopped responding to GET requests on reused connections
Jul 17 14:41:34.840 I ns/e2e-k8s-service-lb-available-6750 svc/service-test Service started responding to GET requests on reused connections
Jul 17 14:42:19.640 E ns/e2e-k8s-service-lb-available-6750 svc/service-test Service stopped responding to GET requests on reused connections
Jul 17 14:42:19.836 I ns/e2e-k8s-service-lb-available-6750 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1594998857.xml
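Note: the percentage in the summary line above is just the disrupted time divided by the monitored window; a minimal sketch of that arithmetic, assuming the report rounds to the nearest whole percent (the 6s and 32m40s figures are taken from the summary, everything else is illustrative):

package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	disrupted := 6 * time.Second                // "at least 6s" from the summary above
	observed := 32*time.Minute + 40*time.Second // the 32m40s monitoring window
	pct := 100 * disrupted.Seconds() / observed.Seconds()
	fmt.Printf("%.2f%% -> reported as %d%%\n", pct, int(math.Round(pct))) // 0.31% -> reported as 0%
}

The same arithmetic reproduces the 7% and 3% figures in the next two sections (140s and 57s of a 35m13s = 2113s window).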



Cluster upgrade Cluster frontend ingress remain available 35m13s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 2m20s of 35m13s (7%):

Jul 17 14:40:17.263 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 17 14:40:17.264 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 17 14:40:17.612 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Jul 17 14:40:18.260 - 3s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Jul 17 14:40:22.615 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 17 14:40:41.261 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 17 14:40:41.636 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 17 14:40:46.615 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 17 14:40:47.260 - 4s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Jul 17 14:40:51.976 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Jul 17 14:41:13.261 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 17 14:41:13.261 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Jul 17 14:41:14.260 - 1s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Jul 17 14:41:14.260 - 1s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Jul 17 14:41:15.019 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Jul 17 14:41:15.260 E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Jul 17 14:41:15.397 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Jul 17 14:41:15.573 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Jul 17 14:41:15.600 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 17 14:41:15.671 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Jul 17 14:41:16.260 E ns/openshift-console route/console Route is not responding to GET requests over new connections
Jul 17 14:41:16.620 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 17 14:41:34.262 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Jul 17 14:41:35.260 - 8s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Jul 17 14:41:43.806 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Jul 17 14:41:47.261 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 17 14:41:47.614 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Jul 17 14:41:47.621 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Jul 17 14:41:48.260 - 16s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Jul 17 14:42:02.815 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 17 14:42:03.260 - 1s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Jul 17 14:42:04.345 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 17 14:42:05.036 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 17 14:42:05.260 E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Jul 17 14:42:05.351 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Jul 17 14:42:05.504 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Jul 17 14:42:15.505 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Jul 17 14:42:16.260 - 6s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Jul 17 14:42:16.261 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 17 14:42:16.608 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Jul 17 14:42:20.261 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Jul 17 14:42:21.260 - 1s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Jul 17 14:42:23.667 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Jul 17 14:42:23.926 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Jul 17 14:52:03.261 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 17 14:52:03.631 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Jul 17 14:54:06.261 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Jul 17 14:54:07.260 - 18s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Jul 17 14:54:12.261 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 17 14:54:13.260 - 2s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Jul 17 14:54:15.876 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Jul 17 14:54:25.807 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 17 14:54:25.871 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Jul 17 14:54:26.260 - 8s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Jul 17 14:54:34.261 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 17 14:54:34.612 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 17 14:54:34.964 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Jul 17 14:55:39.261 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 17 14:55:39.644 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 17 14:57:32.587 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Jul 17 14:57:33.260 - 9s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Jul 17 14:57:43.597 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Jul 17 14:57:47.621 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 17 14:57:48.260 - 4s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Jul 17 14:57:52.971 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 17 14:57:54.261 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 17 14:57:54.608 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Jul 17 14:57:55.261 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Jul 17 14:57:55.632 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Jul 17 14:58:06.263 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 17 14:58:07.260 - 8s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Jul 17 14:58:08.261 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Jul 17 14:58:09.260 - 8s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Jul 17 14:58:09.261 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 17 14:58:09.627 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Jul 17 14:58:16.603 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 17 14:58:18.616 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Jul 17 14:58:27.261 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 17 14:58:27.610 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 17 14:59:21.261 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 17 14:59:21.261 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 17 14:59:21.619 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 17 14:59:21.625 I ns/openshift-console route/console Route started responding to GET requests on reused connections
				from junit_upgrade_1594998857.xml
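Note: the "new connections" vs "reused connections" entries above come from probing the same route with two HTTP clients. A hypothetical sketch (not the monitor's actual code; the route URL and health path are placeholders), assuming one client disables keep-alives so every GET pays for a fresh TCP/TLS handshake while the other reuses a pooled connection:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(client *http.Client, url string) error {
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 500 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	const routeURL = "https://console-openshift-console.example.com/healthz" // placeholder

	newConn := &http.Client{
		Timeout:   15 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true}, // fresh connection per GET
	}
	reusedConn := &http.Client{Timeout: 15 * time.Second} // default transport pools connections

	for _, c := range []struct {
		name   string
		client *http.Client
	}{{"new connections", newConn}, {"reused connections", reusedConn}} {
		if err := probe(c.client, routeURL); err != nil {
			fmt.Printf("Route stopped responding to GET requests over %s: %v\n", c.name, err)
		} else {
			fmt.Printf("Route responding to GET requests over %s\n", c.name)
		}
	}
}

Because the two probes fail independently, a route can flap on one but not the other, which is why the console route above sometimes recovers on reused connections seconds before new connections come back.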



Cluster upgrade Kubernetes and OpenShift APIs remain available 35m13s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 57s of 35m13s (3%):

Jul 17 14:40:47.073 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 17 14:40:48.072 - 10s   E openshift-apiserver OpenShift API is not responding to GET requests
Jul 17 14:40:58.919 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 14:42:05.073 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 17 14:42:05.159 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 14:42:39.073 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jul 17 14:42:39.158 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 14:52:38.073 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 17 14:52:39.072 - 9s    E openshift-apiserver OpenShift API is not responding to GET requests
Jul 17 14:52:49.817 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 14:53:05.073 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 17 14:53:05.157 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 14:56:24.073 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jul 17 14:56:24.159 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 14:56:41.073 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 17 14:56:41.161 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 14:56:57.073 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 17 14:56:57.160 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 14:57:04.904 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 17 14:57:05.072 - 9s    E openshift-apiserver OpenShift API is not responding to GET requests
Jul 17 14:57:14.228 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 14:57:17.192 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 17 14:57:18.072 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Jul 17 14:57:20.351 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 14:57:23.337 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 17 14:57:23.424 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 14:57:29.480 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 17 14:57:30.072 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 17 14:57:30.160 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 14:57:35.624 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 17 14:57:35.710 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 14:57:38.696 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 17 14:57:39.072 - 9s    E openshift-apiserver OpenShift API is not responding to GET requests
Jul 17 14:57:48.457 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 14:59:43.259 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 44.232.174.202:6443: connect: connection refused
Jul 17 14:59:44.072 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Jul 17 14:59:59.163 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 15:00:15.073 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 17 15:00:15.163 I openshift-apiserver OpenShift API started responding to GET requests
Jul 17 15:00:19.240 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 17 15:00:20.072 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Jul 17 15:00:22.401 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1594998857.xml
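Note: the probe URL visible in the failures above polls a deliberately nonexistent imagestream ("missing") through the public API endpoint. A minimal sketch of such an availability check, assuming any timely HTTP answer, even a 404 for the missing object, counts as the API responding, while client timeouts, connection refusals, and 5xx count as disruption (TLS verification and authentication are elided placeholders; this is not the suite's actual monitor code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443" +
		"/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s"

	client := &http.Client{
		Timeout: 15 * time.Second, // matches the Client.Timeout errors logged above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // placeholder; a real probe verifies the CA
		},
	}

	for i := 0; i < 3; i++ {
		resp, err := client.Get(url)
		switch {
		case err != nil:
			fmt.Println("OpenShift API stopped responding to GET requests:", err)
		case resp.StatusCode >= 500:
			resp.Body.Close()
			fmt.Println("OpenShift API stopped responding to GET requests: status", resp.StatusCode)
		default:
			resp.Body.Close()
			fmt.Println("OpenShift API started responding to GET requests")
		}
		time.Sleep(time.Second)
	}
}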



openshift-tests Monitor cluster while tests execute 45m50s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
205 error level events were detected during this test run:

Jul 17 14:31:18.463 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-cluster-version/cluster-version-operator" (5 of 508)
Jul 17 14:33:09.847 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-7b8ccb4db4-287m2 node/ip-10-0-130-128.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): ernalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 13557 (18627)\nW0717 14:28:07.321543       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 13465 (18592)\nW0717 14:28:07.321608       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 13539 (18603)\nW0717 14:28:07.321657       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 17513 (18949)\nW0717 14:28:07.321687       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 17100 (18529)\nW0717 14:28:07.321715       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 12714 (18526)\nW0717 14:28:07.345125       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 17099 (18526)\nW0717 14:28:07.345205       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Role ended with: too old resource version: 16965 (18530)\nW0717 14:28:07.387675       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 15290 (18526)\nW0717 14:28:07.395544       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 17099 (18526)\nW0717 14:31:41.542743       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20051 (20765)\nI0717 14:33:09.082423       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0717 14:33:09.082473       1 leaderelection.go:66] leaderelection lost\n
Jul 17 14:33:52.956 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-128.us-west-2.compute.internal node/ip-10-0-130-128.us-west-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): 33:52.183061       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0717 14:33:52.183065       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0717 14:33:52.183068       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0717 14:33:52.183072       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0717 14:33:52.183075       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0717 14:33:52.183080       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0717 14:33:52.183086       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0717 14:33:52.183092       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0717 14:33:52.183096       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0717 14:33:52.183100       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0717 14:33:52.183107       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0717 14:33:52.183114       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0717 14:33:52.183120       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0717 14:33:52.183124       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0717 14:33:52.183145       1 server.go:692] external host was not specified, using 10.0.130.128\nI0717 14:33:52.183262       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0717 14:33:52.183462       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 17 14:34:14.093 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-128.us-west-2.compute.internal node/ip-10-0-130-128.us-west-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): 34:13.145439       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0717 14:34:13.145446       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0717 14:34:13.145452       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0717 14:34:13.145459       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0717 14:34:13.145466       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0717 14:34:13.145472       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0717 14:34:13.145478       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0717 14:34:13.145484       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0717 14:34:13.145489       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0717 14:34:13.145496       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0717 14:34:13.145506       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0717 14:34:13.145518       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0717 14:34:13.145526       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0717 14:34:13.145535       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0717 14:34:13.145566       1 server.go:692] external host was not specified, using 10.0.130.128\nI0717 14:34:13.145674       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0717 14:34:13.145837       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 17 14:34:49.156 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-128.us-west-2.compute.internal node/ip-10-0-130-128.us-west-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): 34:48.803599       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0717 14:34:48.803606       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0717 14:34:48.803612       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0717 14:34:48.803618       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0717 14:34:48.803624       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0717 14:34:48.803631       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0717 14:34:48.803638       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0717 14:34:48.803644       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0717 14:34:48.803650       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0717 14:34:48.803657       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0717 14:34:48.803667       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0717 14:34:48.803675       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0717 14:34:48.803682       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0717 14:34:48.803690       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0717 14:34:48.803721       1 server.go:692] external host was not specified, using 10.0.130.128\nI0717 14:34:48.803830       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0717 14:34:48.804452       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 17 14:34:49.922 E ns/openshift-machine-api pod/machine-api-operator-6947f66695-lqg6l node/ip-10-0-142-167.us-west-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Jul 17 14:34:55.283 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-128.us-west-2.compute.internal node/ip-10-0-130-128.us-west-2.compute.internal container=cluster-policy-controller-6 container exited with code 255 (Error): I0717 14:34:54.896400       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0717 14:34:54.898716       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0717 14:34:54.898822       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0717 14:34:54.899269       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 17 14:35:13.267 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-128.us-west-2.compute.internal node/ip-10-0-130-128.us-west-2.compute.internal container=cluster-policy-controller-6 container exited with code 255 (Error): I0717 14:35:13.103407       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0717 14:35:13.104903       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0717 14:35:13.104943       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0717 14:35:13.105361       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 17 14:36:18.221 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-167.us-west-2.compute.internal node/ip-10-0-142-167.us-west-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): 36:17.441738       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0717 14:36:17.441742       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0717 14:36:17.441746       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0717 14:36:17.441749       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0717 14:36:17.441753       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0717 14:36:17.441756       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0717 14:36:17.441760       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0717 14:36:17.441764       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0717 14:36:17.441767       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0717 14:36:17.441771       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0717 14:36:17.441777       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0717 14:36:17.441781       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0717 14:36:17.441786       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0717 14:36:17.441790       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0717 14:36:17.441812       1 server.go:692] external host was not specified, using 10.0.142.167\nI0717 14:36:17.441978       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0717 14:36:17.443967       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 17 14:36:21.244 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-167.us-west-2.compute.internal node/ip-10-0-142-167.us-west-2.compute.internal container=cluster-policy-controller-6 container exited with code 255 (Error): I0717 14:36:20.360639       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0717 14:36:20.362053       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0717 14:36:20.362165       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0717 14:36:20.362541       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 17 14:36:35.989 E ns/openshift-cluster-node-tuning-operator pod/tuned-9kzvh node/ip-10-0-128-139.us-west-2.compute.internal container=tuned container exited with code 143 (Error): go:441] Getting recommended profile...\nI0717 14:27:58.247664    2191 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:28:34.221899    2191 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-daemonset-upgrade-9323/ds1-v8r7w) labels changed node wide: true\nI0717 14:28:38.100820    2191 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:28:38.102195    2191 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:28:38.281530    2191 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:28:38.745178    2191 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-6750/service-test-dmhqt) labels changed node wide: true\nI0717 14:28:43.100844    2191 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:28:43.102303    2191 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:28:43.232908    2191 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:28:45.525656    2191 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-deployment-upgrade-8231/dp-857d95bf59-szclq) labels changed node wide: true\nI0717 14:28:48.100808    2191 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:28:48.102453    2191 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:28:48.255867    2191 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:29:51.095370    2191 openshift-tuned.go:852] Lowering resyncPeriod to 59\nI0717 14:34:56.338442    2191 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0717 14:34:56.344642    2191 openshift-tuned.go:881] Pod event watch channel closed.\nI0717 14:34:56.344667    2191 openshift-tuned.go:883] Increasing resyncPeriod to 118\n
Jul 17 14:36:39.304 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-167.us-west-2.compute.internal node/ip-10-0-142-167.us-west-2.compute.internal container=cluster-policy-controller-6 container exited with code 255 (Error): I0717 14:36:39.221180       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0717 14:36:39.222476       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0717 14:36:39.222491       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0717 14:36:39.223098       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 17 14:36:41.324 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-167.us-west-2.compute.internal node/ip-10-0-142-167.us-west-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): 36:40.248373       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0717 14:36:40.248377       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0717 14:36:40.248380       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0717 14:36:40.248385       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0717 14:36:40.248388       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0717 14:36:40.248392       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0717 14:36:40.248395       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0717 14:36:40.248399       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0717 14:36:40.248402       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0717 14:36:40.248406       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0717 14:36:40.248412       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0717 14:36:40.248417       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0717 14:36:40.248436       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0717 14:36:40.248441       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0717 14:36:40.248465       1 server.go:692] external host was not specified, using 10.0.142.167\nI0717 14:36:40.248567       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0717 14:36:40.248744       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 17 14:36:56.166 E ns/openshift-machine-api pod/machine-api-controllers-59d757f9f6-b77s6 node/ip-10-0-153-242.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 17 14:37:06.690 E ns/openshift-cluster-machine-approver pod/machine-approver-b74c7c576-k7csr node/ip-10-0-130-128.us-west-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): sts?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0717 14:35:28.361529       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0717 14:35:29.362053       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0717 14:35:30.362580       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0717 14:35:31.363104       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0717 14:35:32.363659       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0717 14:35:33.364175       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n
Jul 17 14:37:09.461 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-167.us-west-2.compute.internal node/ip-10-0-142-167.us-west-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): 37:09.241479       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0717 14:37:09.241486       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0717 14:37:09.241492       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0717 14:37:09.241499       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0717 14:37:09.241505       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0717 14:37:09.241511       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0717 14:37:09.241517       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0717 14:37:09.241524       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0717 14:37:09.241530       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0717 14:37:09.241536       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0717 14:37:09.241546       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0717 14:37:09.241553       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0717 14:37:09.241562       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0717 14:37:09.241570       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0717 14:37:09.241602       1 server.go:692] external host was not specified, using 10.0.142.167\nI0717 14:37:09.241716       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0717 14:37:09.241868       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 17 14:37:21.588 E ns/openshift-monitoring pod/node-exporter-nmnd2 node/ip-10-0-149-163.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-17T14:21:35Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-17T14:21:35Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 17 14:37:22.993 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Could not update deployment "openshift-cluster-samples-operator/cluster-samples-operator" (256 of 508)\n* Could not update deployment "openshift-console/downloads" (326 of 508)\n* Could not update deployment "openshift-machine-api/cluster-autoscaler-operator" (180 of 508)\n* Could not update deployment "openshift-marketplace/marketplace-operator" (385 of 508)\n* Could not update deployment "openshift-operator-lifecycle-manager/olm-operator" (364 of 508)\n* Could not update deployment "openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator" (284 of 508)
Jul 17 14:37:25.708 E kube-apiserver Kube API started failing: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: dial tcp 35.155.23.131:6443: connect: connection refused
Jul 17 14:37:29.153 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-128-139.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/17 14:23:35 Watching directory: "/etc/alertmanager/config"\n
Jul 17 14:37:29.153 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-128-139.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/17 14:23:35 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/17 14:23:35 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/17 14:23:35 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/17 14:23:35 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/17 14:23:35 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/17 14:23:35 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/17 14:23:35 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/17 14:23:35 http.go:106: HTTPS: listening on [::]:9095\n
Jul 17 14:37:37.651 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-242.us-west-2.compute.internal node/ip-10-0-153-242.us-west-2.compute.internal container=cluster-policy-controller-6 container exited with code 255 (Error): I0717 14:37:36.899564       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0717 14:37:36.901793       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0717 14:37:36.902760       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0717 14:37:36.904215       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 17 14:37:45.644 E ns/openshift-controller-manager pod/controller-manager-tlsg5 node/ip-10-0-153-242.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Jul 17 14:37:48.200 E ns/openshift-monitoring pod/node-exporter-pqmh6 node/ip-10-0-128-139.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-17T14:21:40Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-17T14:21:40Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 17 14:37:48.230 E ns/openshift-monitoring pod/openshift-state-metrics-f5c6cf574-srk9l node/ip-10-0-128-139.us-west-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Jul 17 14:37:48.716 E ns/openshift-monitoring pod/telemeter-client-dd44cdf8-jj4rk node/ip-10-0-149-163.us-west-2.compute.internal container=reload container exited with code 2 (Error): 
Jul 17 14:37:48.716 E ns/openshift-monitoring pod/telemeter-client-dd44cdf8-jj4rk node/ip-10-0-149-163.us-west-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Jul 17 14:37:52.686 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-242.us-west-2.compute.internal node/ip-10-0-153-242.us-west-2.compute.internal container=cluster-policy-controller-6 container exited with code 255 (Error): I0717 14:37:52.211514       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0717 14:37:52.212955       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0717 14:37:52.213072       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0717 14:37:52.213634       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 17 14:37:59.032 E ns/openshift-monitoring pod/node-exporter-sdhfr node/ip-10-0-130-128.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-17T14:17:30Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 17 14:38:05.744 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-167.us-west-2.compute.internal node/ip-10-0-142-167.us-west-2.compute.internal container=cluster-policy-controller-6 container exited with code 255 (Error): bac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found]\nE0717 14:38:00.635002       1 leaderelection.go:341] Failed to update lock: configmaps "cluster-policy-controller" is forbidden: User "system:kube-controller-manager" cannot update resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0717 14:38:05.262827       1 event.go:247] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-142-167 stopped leading'\nI0717 14:38:05.262928       1 leaderelection.go:263] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0717 14:38:05.262968       1 policy_controller.go:94] leaderelection lost\n
Jul 17 14:38:06.811 E ns/openshift-monitoring pod/node-exporter-kfl9k node/ip-10-0-137-105.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-17T14:21:44Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-17T14:21:44Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 17 14:38:06.837 E ns/openshift-monitoring pod/prometheus-adapter-66bc57bf98-xl555 node/ip-10-0-137-105.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0717 14:22:48.751063       1 adapter.go:93] successfully using in-cluster auth\nI0717 14:22:49.278028       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 17 14:38:16.815 E ns/openshift-monitoring pod/node-exporter-lj4tq node/ip-10-0-142-167.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-17T14:17:35Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-17T14:17:35Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 17 14:38:22.852 E ns/openshift-controller-manager pod/controller-manager-7crbs node/ip-10-0-142-167.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Jul 17 14:38:24.804 E ns/openshift-monitoring pod/node-exporter-glkw2 node/ip-10-0-153-242.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-17T14:17:30Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-17T14:17:30Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 17 14:38:30.924 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-137-105.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/17 14:23:55 Watching directory: "/etc/alertmanager/config"\n
Jul 17 14:38:30.924 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-137-105.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/17 14:23:55 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/17 14:23:55 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/17 14:23:55 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/17 14:23:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/17 14:23:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/17 14:23:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/17 14:23:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/17 14:23:55 http.go:106: HTTPS: listening on [::]:9095\n
Jul 17 14:38:32.036 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-149-163.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-17T14:38:12.425Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-07-17T14:38:12.433Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-17T14:38:12.433Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-17T14:38:12.434Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-17T14:38:12.434Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-07-17T14:38:12.434Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-17T14:38:12.435Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-17T14:38:12.435Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-17T14:38:12.435Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-17T14:38:12.435Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-17T14:38:12.435Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-17T14:38:12.435Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-17T14:38:12.435Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-17T14:38:12.435Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-07-17T14:38:12.436Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-17T14:38:12.436Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-07-17
Jul 17 14:38:33.935 E ns/openshift-monitoring pod/thanos-querier-8665757b7b-c5m6f node/ip-10-0-137-105.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/07/17 14:24:35 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/17 14:24:35 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/17 14:24:35 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/17 14:24:35 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/17 14:24:35 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/17 14:24:35 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/17 14:24:35 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/17 14:24:35 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/17 14:24:35 http.go:106: HTTPS: listening on [::]:9091\n
Jul 17 14:38:35.486 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-139.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/07/17 14:24:58 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jul 17 14:38:35.486 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-139.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/07/17 14:24:58 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/17 14:24:58 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/17 14:24:58 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/17 14:24:58 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/17 14:24:58 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/17 14:24:58 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/17 14:24:58 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/17 14:24:58 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/17 14:24:58 http.go:106: HTTPS: listening on [::]:9091\n
Jul 17 14:38:35.486 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-139.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-07-17T14:24:58.120178964Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-07-17T14:24:58.12030705Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-07-17T14:24:58.122062546Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-07-17T14:25:03.274624782Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jul 17 14:38:38.879 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-242.us-west-2.compute.internal node/ip-10-0-153-242.us-west-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): 38:38.166330       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0717 14:38:38.166334       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0717 14:38:38.166340       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0717 14:38:38.166346       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0717 14:38:38.166352       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0717 14:38:38.166356       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0717 14:38:38.166360       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0717 14:38:38.166364       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0717 14:38:38.166367       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0717 14:38:38.166371       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0717 14:38:38.166377       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0717 14:38:38.166383       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0717 14:38:38.166388       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0717 14:38:38.166393       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0717 14:38:38.166415       1 server.go:692] external host was not specified, using 10.0.153.242\nI0717 14:38:38.166531       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0717 14:38:38.166958       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 17 14:38:48.570 E ns/openshift-ingress pod/router-default-5d4f4b9498-h2p2f node/ip-10-0-128-139.us-west-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:38:01.840576       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:38:06.856626       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:38:11.846714       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:38:16.849518       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:38:21.839783       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:38:26.837356       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:38:31.850910       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:38:36.841001       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:38:41.843531       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:38:46.846589       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Jul 17 14:38:53.606 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-139.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-07-17T14:38:48.163Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-17T14:38:48.168Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-17T14:38:48.168Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-17T14:38:48.169Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-17T14:38:48.169Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-07-17T14:38:48.169Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-17T14:38:48.169Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-17T14:38:48.169Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-17T14:38:48.169Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-17T14:38:48.169Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-17T14:38:48.169Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-17T14:38:48.169Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-07-17T14:38:48.169Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-17T14:38:48.169Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-17T14:38:48.171Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-17T14:38:48.171Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-07-17
Jul 17 14:39:00.036 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-242.us-west-2.compute.internal node/ip-10-0-153-242.us-west-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): 38:59.334962       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0717 14:38:59.334969       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0717 14:38:59.334973       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0717 14:38:59.334977       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0717 14:38:59.334981       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0717 14:38:59.334985       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0717 14:38:59.334989       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0717 14:38:59.334995       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0717 14:38:59.335000       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0717 14:38:59.335004       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0717 14:38:59.335010       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0717 14:38:59.335014       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0717 14:38:59.335019       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0717 14:38:59.335024       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0717 14:38:59.335052       1 server.go:692] external host was not specified, using 10.0.153.242\nI0717 14:38:59.335169       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0717 14:38:59.335330       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 17 14:39:00.968 E ns/openshift-controller-manager pod/controller-manager-bv75r node/ip-10-0-153-242.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Jul 17 14:39:15.676 E ns/openshift-marketplace pod/certified-operators-769bf4c54-jthnr node/ip-10-0-128-139.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jul 17 14:39:29.091 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-242.us-west-2.compute.internal node/ip-10-0-153-242.us-west-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): 39:28.214684       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0717 14:39:28.214688       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0717 14:39:28.214692       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0717 14:39:28.214696       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0717 14:39:28.214700       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0717 14:39:28.214703       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0717 14:39:28.214707       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0717 14:39:28.214711       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0717 14:39:28.214714       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0717 14:39:28.214718       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0717 14:39:28.214724       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0717 14:39:28.214729       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0717 14:39:28.214734       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0717 14:39:28.214739       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0717 14:39:28.214762       1 server.go:692] external host was not specified, using 10.0.153.242\nI0717 14:39:28.214863       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0717 14:39:28.215036       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 17 14:39:47.222 E ns/openshift-service-ca pod/service-serving-cert-signer-75974d88bc-5rdfc node/ip-10-0-153-242.us-west-2.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Jul 17 14:39:47.231 E ns/openshift-service-ca pod/apiservice-cabundle-injector-8665dc8d57-4n26c node/ip-10-0-153-242.us-west-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Jul 17 14:40:16.164 E ns/openshift-console pod/console-6b7db5d6b5-4wwp8 node/ip-10-0-142-167.us-west-2.compute.internal container=console container exited with code 2 (Error): und\n2020/07/17 14:23:26 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/17 14:23:36 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/17 14:23:46 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/17 14:23:56 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/17 14:24:06 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/17 14:24:16 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/17 14:24:26 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/17 14:24:36 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/17 14:24:46 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/17 14:24:56 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/17 14:25:06 cmd/main: Binding to [::]:8443...\n2020/07/17 14:25:06 cmd/main: using TLS\n
Jul 17 14:40:31.469 E ns/openshift-sdn pod/sdn-controller-pjswn node/ip-10-0-130-128.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0717 14:11:32.832995       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Jul 17 14:40:38.223 E ns/openshift-sdn pod/sdn-controller-rh4p9 node/ip-10-0-142-167.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0717 14:11:34.153801       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0717 14:15:00.437735       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.156.109:6443: i/o timeout\n
Jul 17 14:40:42.370 E ns/openshift-sdn pod/sdn-2fkk9 node/ip-10-0-153-242.us-west-2.compute.internal container=sdn container exited with code 255 (Error): 0717 14:40:15.266391    2305 proxier.go:350] userspace syncProxyRules took 30.206583ms\nI0717 14:40:15.389108    2305 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:40:15.389127    2305 proxier.go:350] userspace syncProxyRules took 28.739757ms\nI0717 14:40:23.717308    2305 roundrobin.go:270] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.130.128:6443 10.0.142.167:6443 10.0.153.242:6443]\nI0717 14:40:23.717337    2305 roundrobin.go:218] Delete endpoint 10.0.153.242:6443 for service "default/kubernetes:https"\nI0717 14:40:23.866944    2305 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:40:23.866961    2305 proxier.go:350] userspace syncProxyRules took 27.654797ms\nI0717 14:40:28.891654    2305 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.17:6443 10.130.0.4:6443]\nI0717 14:40:28.891744    2305 roundrobin.go:218] Delete endpoint 10.129.0.3:6443 for service "openshift-multus/multus-admission-controller:"\nI0717 14:40:29.046636    2305 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:40:29.046657    2305 proxier.go:350] userspace syncProxyRules took 31.34425ms\nI0717 14:40:32.766835    2305 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.130.128:6443 10.0.142.167:6443 10.0.153.242:6443]\nI0717 14:40:32.766871    2305 roundrobin.go:218] Delete endpoint 10.0.153.242:6443 for service "openshift-kube-apiserver/apiserver:https"\nI0717 14:40:32.883285    2305 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:40:32.883305    2305 proxier.go:350] userspace syncProxyRules took 25.278142ms\nI0717 14:40:36.985214    2305 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nF0717 14:40:41.751419    2305 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Jul 17 14:40:45.385 E ns/openshift-sdn pod/sdn-controller-wmkzv node/ip-10-0-153-242.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): :115] Allocated netid 7855001 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-9323"\nI0717 14:28:32.799770       1 vnids.go:115] Allocated netid 2329240 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-8402"\nI0717 14:28:32.819713       1 vnids.go:115] Allocated netid 305677 for namespace "e2e-k8s-service-lb-available-6750"\nI0717 14:28:32.828577       1 vnids.go:115] Allocated netid 1461436 for namespace "e2e-k8s-sig-apps-deployment-upgrade-8231"\nE0717 14:37:25.218676       1 reflector.go:280] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.NetNamespace: Get https://api-int.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/apis/network.openshift.io/v1/netnamespaces?allowWatchBookmarks=true&resourceVersion=19371&timeout=6m39s&timeoutSeconds=399&watch=true: dial tcp 10.0.156.109:6443: connect: connection refused\nW0717 14:37:25.248886       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 19120 (22172)\nW0717 14:37:25.399273       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 19661 (19949)\nW0717 14:37:26.222448       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 19371 (22172)\nW0717 14:39:45.922099       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 22172 (24153)\nW0717 14:39:46.730879       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 22172 (24153)\nW0717 14:39:46.745228       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 19949 (22382)\n
Jul 17 14:40:59.900 E ns/openshift-multus pod/multus-gd4k9 node/ip-10-0-128-139.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 17 14:41:03.912 E ns/openshift-sdn pod/ovs-vslr6 node/ip-10-0-128-139.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): last 0 s (5 adds)\n2020-07-17T14:38:24.452Z|00166|connmgr|INFO|br0<->unix#989: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:38:27.524Z|00167|connmgr|INFO|br0<->unix#994: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:38:27.550Z|00168|connmgr|INFO|br0<->unix#997: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:38:27.573Z|00169|bridge|INFO|bridge br0: deleted interface veth1aee7d8a on port 11\n2020-07-17T14:38:34.948Z|00170|connmgr|INFO|br0<->unix#1004: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:38:34.984Z|00171|connmgr|INFO|br0<->unix#1007: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:38:35.008Z|00172|bridge|INFO|bridge br0: deleted interface veth45d72f0e on port 18\n2020-07-17T14:38:43.447Z|00173|bridge|INFO|bridge br0: added interface veth52ee19e3 on port 29\n2020-07-17T14:38:43.488Z|00174|connmgr|INFO|br0<->unix#1019: 5 flow_mods in the last 0 s (5 adds)\n2020-07-17T14:38:43.535Z|00175|connmgr|INFO|br0<->unix#1022: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:38:47.904Z|00176|connmgr|INFO|br0<->unix#1027: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:38:47.946Z|00177|connmgr|INFO|br0<->unix#1030: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:38:47.991Z|00178|bridge|INFO|bridge br0: deleted interface veth452dfad4 on port 7\n2020-07-17T14:39:07.753Z|00179|connmgr|INFO|br0<->unix#1045: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:39:07.789Z|00180|connmgr|INFO|br0<->unix#1048: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:39:07.812Z|00181|bridge|INFO|bridge br0: deleted interface veth62a1b043 on port 12\n2020-07-17T14:39:14.997Z|00182|connmgr|INFO|br0<->unix#1058: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:39:15.034Z|00183|connmgr|INFO|br0<->unix#1061: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:39:15.057Z|00184|bridge|INFO|bridge br0: deleted interface veth7ce74be2 on port 13\n2020-07-17 14:41:03 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 17 14:41:25.280 E ns/openshift-sdn pod/ovs-cs6lf node/ip-10-0-137-105.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): 07-17T14:40:24.422Z|00035|reconnect|WARN|unix#948: connection dropped (Broken pipe)\n2020-07-17T14:41:24.158Z|00174|bridge|INFO|bridge br0: deleted interface vetha689ec99 on port 14\n2020-07-17T14:41:24.158Z|00175|bridge|INFO|bridge br0: deleted interface veth2630c6fc on port 25\n2020-07-17T14:41:24.158Z|00176|bridge|INFO|bridge br0: deleted interface veth4335747e on port 29\n2020-07-17T14:41:24.158Z|00177|bridge|INFO|bridge br0: deleted interface vethf63db6f8 on port 4\n2020-07-17T14:41:24.159Z|00178|bridge|INFO|bridge br0: deleted interface tun0 on port 2\n2020-07-17T14:41:24.159Z|00179|bridge|INFO|bridge br0: deleted interface vethd20d574f on port 15\n2020-07-17T14:41:24.159Z|00180|bridge|INFO|bridge br0: deleted interface vethe3107a6c on port 16\n2020-07-17T14:41:24.159Z|00181|bridge|INFO|bridge br0: deleted interface vethd2288f75 on port 21\n2020-07-17T14:41:24.159Z|00182|bridge|INFO|bridge br0: deleted interface veth71aaa443 on port 22\n2020-07-17T14:41:24.159Z|00183|bridge|INFO|bridge br0: deleted interface veth17dbc60e on port 26\n2020-07-17T14:41:24.159Z|00184|bridge|INFO|bridge br0: deleted interface veth32a0ef32 on port 17\n2020-07-17T14:41:24.159Z|00185|bridge|INFO|bridge br0: deleted interface vethef8e4b9f on port 23\n2020-07-17T14:41:24.159Z|00186|bridge|INFO|bridge br0: deleted interface veth2d802c26 on port 18\n2020-07-17T14:41:24.159Z|00187|bridge|INFO|bridge br0: deleted interface vethcdbd28c0 on port 28\n2020-07-17T14:41:24.159Z|00188|bridge|INFO|bridge br0: deleted interface vethb58900a9 on port 27\n2020-07-17T14:41:24.159Z|00189|bridge|INFO|bridge br0: deleted interface br0 on port 65534\n2020-07-17T14:41:24.159Z|00190|bridge|INFO|bridge br0: deleted interface veth3d9f566f on port 24\n2020-07-17T14:41:24.159Z|00191|bridge|INFO|bridge br0: deleted interface vxlan0 on port 1\n2020-07-17T14:41:24.159Z|00192|bridge|INFO|bridge br0: deleted interface veth894ebc4e on port 13\n2020-07-17 14:41:24 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 17 14:41:31.379 E ns/openshift-sdn pod/sdn-njzrl node/ip-10-0-137-105.us-west-2.compute.internal container=sdn container exited with code 255 (Error): I0717 14:41:09.363343    1885 roundrobin.go:218] Delete endpoint 10.129.0.59:6443 for service "openshift-multus/multus-admission-controller:"\nI0717 14:41:09.383108    1885 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.59:6443 10.130.0.4:6443]\nI0717 14:41:09.383144    1885 roundrobin.go:218] Delete endpoint 10.128.0.17:6443 for service "openshift-multus/multus-admission-controller:"\nI0717 14:41:09.500293    1885 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:41:09.500317    1885 proxier.go:350] userspace syncProxyRules took 27.150249ms\nI0717 14:41:09.628112    1885 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:41:09.628134    1885 proxier.go:350] userspace syncProxyRules took 27.947979ms\nI0717 14:41:11.774537    1885 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-6750/service-test: to [10.129.2.16:80]\nI0717 14:41:11.774662    1885 roundrobin.go:218] Delete endpoint 10.128.2.20:80 for service "e2e-k8s-service-lb-available-6750/service-test:"\nI0717 14:41:11.906057    1885 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:41:11.906080    1885 proxier.go:350] userspace syncProxyRules took 26.925923ms\nI0717 14:41:16.768831    1885 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-6750/service-test: to [10.128.2.20:80 10.129.2.16:80]\nI0717 14:41:16.768865    1885 roundrobin.go:218] Delete endpoint 10.128.2.20:80 for service "e2e-k8s-service-lb-available-6750/service-test:"\nI0717 14:41:16.906586    1885 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:41:16.906618    1885 proxier.go:350] userspace syncProxyRules took 27.400338ms\nI0717 14:41:30.676169    1885 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0717 14:41:30.676212    1885 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 17 14:41:52.506 E ns/openshift-sdn pod/sdn-pq854 node/ip-10-0-142-167.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ble-6750/service-test: to [10.129.2.16:80]\nI0717 14:41:11.773759    2249 roundrobin.go:218] Delete endpoint 10.128.2.20:80 for service "e2e-k8s-service-lb-available-6750/service-test:"\nI0717 14:41:11.882362    2249 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:41:11.882380    2249 proxier.go:350] userspace syncProxyRules took 23.408181ms\nI0717 14:41:16.767949    2249 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-6750/service-test: to [10.128.2.20:80 10.129.2.16:80]\nI0717 14:41:16.767977    2249 roundrobin.go:218] Delete endpoint 10.128.2.20:80 for service "e2e-k8s-service-lb-available-6750/service-test:"\nI0717 14:41:16.878853    2249 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:41:16.878870    2249 proxier.go:350] userspace syncProxyRules took 22.86356ms\nI0717 14:41:33.770195    2249 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-6750/service-test: to [10.128.2.20:80]\nI0717 14:41:33.770246    2249 roundrobin.go:218] Delete endpoint 10.129.2.16:80 for service "e2e-k8s-service-lb-available-6750/service-test:"\nI0717 14:41:33.902432    2249 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:41:33.902455    2249 proxier.go:350] userspace syncProxyRules took 24.466715ms\nI0717 14:41:50.230855    2249 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-console/downloads:http to [10.131.0.18:8080]\nI0717 14:41:50.230901    2249 roundrobin.go:218] Delete endpoint 10.129.2.20:8080 for service "openshift-console/downloads:http"\nI0717 14:41:50.348886    2249 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:41:50.348903    2249 proxier.go:350] userspace syncProxyRules took 23.848242ms\nI0717 14:41:52.329470    2249 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0717 14:41:52.329502    2249 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 17 14:41:53.700 E ns/openshift-multus pod/multus-nslmd node/ip-10-0-130-128.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 17 14:42:10.757 E ns/openshift-sdn pod/ovs-9mjw6 node/ip-10-0-130-128.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): in the last 0 s (1 deletes)\n2020-07-17T14:41:29.260Z|00411|connmgr|INFO|br0<->unix#2081: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-17T14:41:29.314Z|00412|connmgr|INFO|br0<->unix#2085: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-17T14:41:29.317Z|00413|connmgr|INFO|br0<->unix#2087: 3 flow_mods in the last 0 s (3 adds)\n2020-07-17T14:41:29.341Z|00414|connmgr|INFO|br0<->unix#2091: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-17T14:41:29.345Z|00415|connmgr|INFO|br0<->unix#2093: 1 flow_mods in the last 0 s (1 adds)\n2020-07-17T14:41:29.370Z|00416|connmgr|INFO|br0<->unix#2096: 3 flow_mods in the last 0 s (3 adds)\n2020-07-17T14:41:29.399Z|00417|connmgr|INFO|br0<->unix#2099: 1 flow_mods in the last 0 s (1 adds)\n2020-07-17T14:41:29.421Z|00418|connmgr|INFO|br0<->unix#2102: 3 flow_mods in the last 0 s (3 adds)\n2020-07-17T14:41:29.447Z|00419|connmgr|INFO|br0<->unix#2105: 1 flow_mods in the last 0 s (1 adds)\n2020-07-17T14:41:29.469Z|00420|connmgr|INFO|br0<->unix#2108: 3 flow_mods in the last 0 s (3 adds)\n2020-07-17T14:41:29.489Z|00421|connmgr|INFO|br0<->unix#2111: 1 flow_mods in the last 0 s (1 adds)\n2020-07-17T14:41:29.511Z|00422|connmgr|INFO|br0<->unix#2114: 3 flow_mods in the last 0 s (3 adds)\n2020-07-17T14:41:29.532Z|00423|connmgr|INFO|br0<->unix#2117: 1 flow_mods in the last 0 s (1 adds)\n2020-07-17T14:41:39.653Z|00424|connmgr|INFO|br0<->unix#2126: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:41:39.680Z|00425|connmgr|INFO|br0<->unix#2129: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:41:39.700Z|00426|bridge|INFO|bridge br0: deleted interface veth54c6ca86 on port 18\n2020-07-17T14:41:48.490Z|00427|bridge|INFO|bridge br0: added interface veth53beea78 on port 68\n2020-07-17T14:41:48.517Z|00428|connmgr|INFO|br0<->unix#2137: 5 flow_mods in the last 0 s (5 adds)\n2020-07-17T14:41:48.553Z|00429|connmgr|INFO|br0<->unix#2140: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17 14:42:10 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 17 14:42:14.534 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 17 14:42:19.785 E ns/openshift-sdn pod/sdn-ds79l node/ip-10-0-130-128.us-west-2.compute.internal container=sdn container exited with code 255 (Error): :17.497252   79764 proxier.go:350] userspace syncProxyRules took 24.800744ms\nI0717 14:42:17.497281   79764 service.go:357] Adding new service port "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:" at 172.30.98.127:443/TCP\nI0717 14:42:17.536058   79764 roundrobin.go:298] LoadBalancerRR: Removing endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:\nI0717 14:42:17.621336   79764 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:42:17.621353   79764 proxier.go:350] userspace syncProxyRules took 23.329299ms\nI0717 14:42:17.621371   79764 service.go:382] Removing service port "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0717 14:42:17.736941   79764 roundrobin.go:236] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.129.0.58:5443 10.130.0.63:5443]\nI0717 14:42:17.736964   79764 roundrobin.go:218] Delete endpoint 10.129.0.58:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0717 14:42:17.736973   79764 roundrobin.go:218] Delete endpoint 10.130.0.63:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0717 14:42:17.741035   79764 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:42:17.741048   79764 proxier.go:350] userspace syncProxyRules took 27.495877ms\nI0717 14:42:17.741069   79764 service.go:357] Adding new service port "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:" at 172.30.41.50:443/TCP\nI0717 14:42:17.852690   79764 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:42:17.852709   79764 proxier.go:350] userspace syncProxyRules took 23.587826ms\nI0717 14:42:18.809289   79764 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0717 14:42:18.809325   79764 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 17 14:42:21.890 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-57866987ff-8bckc node/ip-10-0-130-128.us-west-2.compute.internal container=manager container exited with code 1 (Error): cret=openshift-network-operator/installer-cloud-credentials\ntime="2020-07-17T14:37:34Z" level=debug msg="updating credentials request status" controller=credreq cr=openshift-cloud-credential-operator/openshift-network secret=openshift-network-operator/installer-cloud-credentials\ntime="2020-07-17T14:37:34Z" level=debug msg="status unchanged" controller=credreq cr=openshift-cloud-credential-operator/openshift-network secret=openshift-network-operator/installer-cloud-credentials\ntime="2020-07-17T14:37:34Z" level=debug msg="syncing cluster operator status" controller=credreq_status\ntime="2020-07-17T14:37:34Z" level=debug msg="4 cred requests" controller=credreq_status\ntime="2020-07-17T14:37:34Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="No credentials requests reporting errors." reason=NoCredentialsFailing status=False type=Degraded\ntime="2020-07-17T14:37:34Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="4 of 4 credentials requests provisioned and reconciled." reason=ReconcilingComplete status=False type=Progressing\ntime="2020-07-17T14:37:34Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Available\ntime="2020-07-17T14:37:34Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Upgradeable\ntime="2020-07-17T14:37:34Z" level=info msg="Verified cloud creds can be used for minting new creds" controller=secretannotator\ntime="2020-07-17T14:39:34Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-07-17T14:39:34Z" level=info msg="reconcile complete" controller=metrics elapsed=1.424831ms\ntime="2020-07-17T14:41:34Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-07-17T14:41:34Z" level=info msg="reconcile complete" controller=metrics elapsed=1.022127ms\ntime="2020-07-17T14:42:21Z" level=error msg="leader election lostunable to run the manager"\n
Jul 17 14:42:45.538 E ns/openshift-sdn pod/sdn-kqndp node/ip-10-0-149-163.us-west-2.compute.internal container=sdn container exited with code 255 (Error): 717 14:42:22.795042   58517 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/controller-manager-service: to [10.128.0.58:443]\nI0717 14:42:22.795081   58517 roundrobin.go:218] Delete endpoint 10.128.0.58:443 for service "openshift-cloud-credential-operator/controller-manager-service:"\nI0717 14:42:22.795119   58517 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/cco-metrics:cco-metrics to [10.128.0.58:2112]\nI0717 14:42:22.795131   58517 roundrobin.go:218] Delete endpoint 10.128.0.58:2112 for service "openshift-cloud-credential-operator/cco-metrics:cco-metrics"\nI0717 14:42:22.921174   58517 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:42:22.921198   58517 proxier.go:350] userspace syncProxyRules took 27.424427ms\nI0717 14:42:23.055761   58517 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:42:23.055796   58517 proxier.go:350] userspace syncProxyRules took 29.926485ms\nI0717 14:42:39.018339   58517 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.67:6443 10.129.0.59:6443 10.130.0.68:6443]\nI0717 14:42:39.018373   58517 roundrobin.go:218] Delete endpoint 10.130.0.68:6443 for service "openshift-multus/multus-admission-controller:"\nI0717 14:42:39.147743   58517 proxier.go:371] userspace proxy: processing 0 service events\nI0717 14:42:39.147763   58517 proxier.go:350] userspace syncProxyRules took 27.036744ms\nI0717 14:42:40.937731   58517 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nE0717 14:42:40.937769   58517 metrics.go:133] failed to dump OVS flows for metrics: exit status 1\nI0717 14:42:41.033462   58517 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nF0717 14:42:45.099776   58517 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Jul 17 14:42:54.818 E ns/openshift-multus pod/multus-tmhml node/ip-10-0-142-167.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 17 14:43:45.707 E ns/openshift-multus pod/multus-d4t97 node/ip-10-0-149-163.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 17 14:44:37.487 E ns/openshift-multus pod/multus-p98tv node/ip-10-0-153-242.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 17 14:46:19.422 E ns/openshift-machine-config-operator pod/machine-config-operator-7b9fd8959b-vlj99 node/ip-10-0-130-128.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): 215 (23395)\nW0717 14:37:25.589808       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.DaemonSet ended with: too old resource version: 19787 (22384)\nW0717 14:37:25.624855       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 23170 (23192)\nW0717 14:37:25.625058       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 19539 (22382)\nW0717 14:37:25.632352       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfig ended with: too old resource version: 17565 (24151)\nW0717 14:37:25.661686       1 reflector.go:299] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.CustomResourceDefinition ended with: too old resource version: 20764 (22382)\nW0717 14:37:25.664982       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 17580 (23784)\nW0717 14:37:25.667865       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 17581 (23898)\nW0717 14:37:26.664684       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 17584 (24153)\nW0717 14:37:26.699105       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ControllerConfig ended with: too old resource version: 17572 (24168)\nW0717 14:37:26.803756       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfigPool ended with: too old resource version: 17583 (24177)\n
Jul 17 14:48:28.488 E ns/openshift-machine-config-operator pod/machine-config-daemon-7nghd node/ip-10-0-142-167.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 17 14:48:38.235 E ns/openshift-machine-config-operator pod/machine-config-daemon-tsgrm node/ip-10-0-149-163.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 17 14:48:52.029 E ns/openshift-machine-config-operator pod/machine-config-daemon-bwg2t node/ip-10-0-153-242.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 17 14:49:20.872 E ns/openshift-machine-config-operator pod/machine-config-daemon-8jzgk node/ip-10-0-130-128.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 17 14:49:40.147 E ns/openshift-machine-config-operator pod/machine-config-controller-78b977dcbd-9cfv9 node/ip-10-0-153-242.us-west-2.compute.internal container=machine-config-controller container exited with code 2 (Error): watch of *v1.ClusterVersion ended with: too old resource version: 24130 (24153)\nW0717 14:37:26.557611       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 17570 (24156)\nW0717 14:37:26.942382       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.KubeletConfig ended with: too old resource version: 17580 (24177)\nW0717 14:37:26.942850       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Scheduler ended with: too old resource version: 17576 (24177)\nW0717 14:37:26.943082       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfigPool ended with: too old resource version: 17583 (24177)\nI0717 14:37:27.597988       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool master\nI0717 14:37:27.615811       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool worker\nW0717 14:38:11.986230       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 24153 (25948)\nW0717 14:38:14.748047       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 25948 (26012)\nW0717 14:44:16.922386       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 29877 (30091)\nW0717 14:44:19.792592       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 30091 (30100)\n
Jul 17 14:51:39.215 E ns/openshift-machine-config-operator pod/machine-config-server-pmjt2 node/ip-10-0-130-128.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0717 14:12:40.237102       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-6-ga3a98da0-dirty (a3a98da0434ff1a3d5d6ad27df2237a91ebadf53)\nI0717 14:12:40.237851       1 api.go:56] Launching server on :22624\nI0717 14:12:40.237926       1 api.go:56] Launching server on :22623\n
Jul 17 14:51:41.929 E ns/openshift-machine-config-operator pod/machine-config-server-rlnq4 node/ip-10-0-142-167.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0717 14:12:39.538554       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-6-ga3a98da0-dirty (a3a98da0434ff1a3d5d6ad27df2237a91ebadf53)\nI0717 14:12:39.539415       1 api.go:56] Launching server on :22624\nI0717 14:12:39.539472       1 api.go:56] Launching server on :22623\nI0717 14:18:34.347084       1 api.go:102] Pool worker requested by 10.0.156.109:14393\nI0717 14:18:34.523849       1 api.go:102] Pool worker requested by 10.0.156.109:12716\n
Jul 17 14:51:49.526 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-659d4w2xk node/ip-10-0-153-242.us-west-2.compute.internal container=operator container exited with code 255 (Error): hub.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 38 items received\nI0717 14:51:05.300620       1 httplog.go:90] GET /metrics: (5.495129ms) 200 [Prometheus/2.14.0 10.128.2.28:48142]\nI0717 14:51:07.756460       1 httplog.go:90] GET /metrics: (5.080574ms) 200 [Prometheus/2.14.0 10.131.0.20:33056]\nI0717 14:51:29.167058       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 3 items received\nW0717 14:51:29.228984       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 32569 (32656)\nI0717 14:51:30.229818       1 reflector.go:158] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0717 14:51:31.921236       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 0 items received\nW0717 14:51:31.960575       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 32656 (32671)\nI0717 14:51:32.960754       1 reflector.go:158] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0717 14:51:35.300938       1 httplog.go:90] GET /metrics: (5.854052ms) 200 [Prometheus/2.14.0 10.128.2.28:48142]\nI0717 14:51:36.446835       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 1 items received\nI0717 14:51:37.752627       1 httplog.go:90] GET /metrics: (1.198445ms) 200 [Prometheus/2.14.0 10.131.0.20:33056]\nI0717 14:51:48.837485       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0717 14:51:48.837619       1 leaderelection.go:66] leaderelection lost\n
Jul 17 14:51:49.713 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-149-163.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/17 14:38:28 Watching directory: "/etc/alertmanager/config"\n
Jul 17 14:51:49.713 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-149-163.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/17 14:38:29 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/17 14:38:29 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/17 14:38:29 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/17 14:38:29 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/17 14:38:29 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/17 14:38:29 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/17 14:38:29 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/17 14:38:29 http.go:106: HTTPS: listening on [::]:9095\n
Jul 17 14:51:49.757 E ns/openshift-monitoring pod/thanos-querier-5d77486f4d-ncqgb node/ip-10-0-149-163.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/07/17 14:38:28 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/17 14:38:28 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/17 14:38:28 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/17 14:38:28 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/17 14:38:28 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/17 14:38:28 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/17 14:38:28 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/17 14:38:28 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/17 14:38:28 http.go:106: HTTPS: listening on [::]:9091\n
Jul 17 14:51:49.786 E ns/openshift-monitoring pod/telemeter-client-7487687875-nv9p5 node/ip-10-0-149-163.us-west-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Jul 17 14:51:49.786 E ns/openshift-monitoring pod/telemeter-client-7487687875-nv9p5 node/ip-10-0-149-163.us-west-2.compute.internal container=reload container exited with code 2 (Error): 
Jul 17 14:51:49.808 E ns/openshift-monitoring pod/prometheus-adapter-6d46784f5c-dzzpt node/ip-10-0-149-163.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0717 14:38:17.088888       1 adapter.go:93] successfully using in-cluster auth\nI0717 14:38:17.642820       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 17 14:51:53.725 E ns/openshift-machine-config-operator pod/machine-config-server-zb2rq node/ip-10-0-153-242.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0717 14:12:39.546114       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-6-ga3a98da0-dirty (a3a98da0434ff1a3d5d6ad27df2237a91ebadf53)\nI0717 14:12:39.546720       1 api.go:56] Launching server on :22624\nI0717 14:12:39.546757       1 api.go:56] Launching server on :22623\nI0717 14:18:36.208611       1 api.go:102] Pool worker requested by 10.0.156.109:14866\n
Jul 17 14:51:54.811 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-674ff6c9f8-gn65h node/ip-10-0-153-242.us-west-2.compute.internal container=cluster-node-tuning-operator container exited with code 255 (Error): r-runtime/pkg/cache/internal/informers_map.go:204: watch of *v1.ClusterRoleBinding ended with: too old resource version: 25583 (25627)\nW0717 14:39:45.925716       1 reflector.go:299] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:204: watch of *v1.ClusterRole ended with: too old resource version: 25576 (25627)\nW0717 14:39:46.001351       1 reflector.go:299] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: watch of *v1.Tuned ended with: too old resource version: 25543 (27466)\nW0717 14:39:46.708532       1 reflector.go:299] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:204: watch of *v1.ConfigMap ended with: too old resource version: 25587 (27287)\nI0717 14:39:47.003679       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0717 14:39:47.003702       1 status.go:25] syncOperatorStatus()\nI0717 14:39:47.017768       1 tuned_controller.go:188] syncServiceAccount()\nI0717 14:39:47.017901       1 tuned_controller.go:215] syncClusterRole()\nI0717 14:39:47.085468       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0717 14:39:47.122630       1 tuned_controller.go:281] syncClusterConfigMap()\nI0717 14:39:47.126385       1 tuned_controller.go:281] syncClusterConfigMap()\nI0717 14:39:47.129914       1 tuned_controller.go:320] syncDaemonSet()\nI0717 14:47:57.400813       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0717 14:47:57.400843       1 status.go:25] syncOperatorStatus()\nI0717 14:47:57.409880       1 tuned_controller.go:188] syncServiceAccount()\nI0717 14:47:57.410023       1 tuned_controller.go:215] syncClusterRole()\nI0717 14:47:57.440004       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0717 14:47:57.468810       1 tuned_controller.go:281] syncClusterConfigMap()\nI0717 14:47:57.473717       1 tuned_controller.go:281] syncClusterConfigMap()\nI0717 14:47:57.477096       1 tuned_controller.go:320] syncDaemonSet()\nF0717 14:51:53.631735       1 main.go:82] <nil>\n
Jul 17 14:52:08.683 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-137-105.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-07-17T14:52:01.499Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-17T14:52:01.503Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-17T14:52:01.504Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-17T14:52:01.505Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-17T14:52:01.505Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-07-17T14:52:01.505Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-17T14:52:01.506Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-17T14:52:01.506Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-17T14:52:01.506Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-17T14:52:01.506Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-17T14:52:01.506Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-17T14:52:01.506Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-17T14:52:01.506Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-07-17T14:52:01.506Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-17T14:52:01.506Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-17T14:52:01.506Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-07-17
Jul 17 14:52:29.534 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 17 14:53:39.766 E ns/openshift-monitoring pod/node-exporter-kb828 node/ip-10-0-149-163.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-17T14:37:38Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-17T14:37:38Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 17 14:53:39.850 E ns/openshift-cluster-node-tuning-operator pod/tuned-4qwb4 node/ip-10-0-149-163.us-west-2.compute.internal container=tuned container exited with code 143 (Error): de wide: true\nI0717 14:43:49.537489   50210 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:43:49.538906   50210 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:43:49.650589   50210 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:43:53.532311   50210 openshift-tuned.go:852] Lowering resyncPeriod to 62\nI0717 14:48:26.319767   50210 openshift-tuned.go:550] Pod (openshift-dns/dns-default-l8jpt) labels changed node wide: true\nI0717 14:48:29.537542   50210 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:48:29.539525   50210 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:48:29.650289   50210 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:48:46.317025   50210 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-daemon-tsgrm) labels changed node wide: true\nI0717 14:48:49.537502   50210 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:48:49.538979   50210 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:48:49.648272   50210 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:51:50.616880   50210 openshift-tuned.go:550] Pod (openshift-console/downloads-756d8b7df6-c4skt) labels changed node wide: true\nI0717 14:51:54.537524   50210 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:51:54.539077   50210 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:51:54.651525   50210 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:51:56.370208   50210 openshift-tuned.go:550] Pod (openshift-image-registry/image-registry-59bf7b6876-lzg4q) labels changed node wide: true\n
Jul 17 14:53:39.895 E ns/openshift-sdn pod/ovs-cnfnx node/ip-10-0-149-163.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): s in the last 0 s (4 deletes)\n2020-07-17T14:51:48.568Z|00126|bridge|INFO|bridge br0: deleted interface vethc1d69ae7 on port 3\n2020-07-17T14:51:48.609Z|00127|connmgr|INFO|br0<->unix#485: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:51:48.658Z|00128|connmgr|INFO|br0<->unix#488: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:51:48.684Z|00129|bridge|INFO|bridge br0: deleted interface veth7824f958 on port 11\n2020-07-17T14:51:48.730Z|00130|connmgr|INFO|br0<->unix#491: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:51:48.785Z|00131|connmgr|INFO|br0<->unix#494: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:51:48.826Z|00132|bridge|INFO|bridge br0: deleted interface vethfe791043 on port 7\n2020-07-17T14:51:48.881Z|00133|connmgr|INFO|br0<->unix#497: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:51:48.927Z|00134|connmgr|INFO|br0<->unix#500: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:51:48.956Z|00135|bridge|INFO|bridge br0: deleted interface vetha30a4bf1 on port 12\n2020-07-17T14:51:49.000Z|00136|connmgr|INFO|br0<->unix#503: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:51:49.044Z|00137|connmgr|INFO|br0<->unix#506: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:51:49.092Z|00138|bridge|INFO|bridge br0: deleted interface veth3564c4b6 on port 5\n2020-07-17T14:51:49.131Z|00139|connmgr|INFO|br0<->unix#509: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:51:49.195Z|00140|connmgr|INFO|br0<->unix#512: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:51:49.229Z|00141|bridge|INFO|bridge br0: deleted interface veth96e9bff0 on port 9\n2020-07-17T14:51:49.269Z|00142|connmgr|INFO|br0<->unix#515: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:51:49.301Z|00143|connmgr|INFO|br0<->unix#518: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:51:49.322Z|00144|bridge|INFO|bridge br0: deleted interface vethbd0aec7d on port 8\n2020-07-17 14:51:57 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 17 14:53:39.906 E ns/openshift-multus pod/multus-zsw2b node/ip-10-0-149-163.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Jul 17 14:53:39.971 E ns/openshift-machine-config-operator pod/machine-config-daemon-g5chq node/ip-10-0-149-163.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 17 14:53:43.443 E ns/openshift-monitoring pod/node-exporter-kb828 node/ip-10-0-149-163.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 17 14:53:43.510 E ns/openshift-multus pod/multus-zsw2b node/ip-10-0-149-163.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 17 14:53:45.337 E ns/openshift-multus pod/multus-zsw2b node/ip-10-0-149-163.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 17 14:53:48.201 E ns/openshift-machine-config-operator pod/machine-config-daemon-g5chq node/ip-10-0-149-163.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 17 14:53:52.761 E ns/openshift-marketplace pod/community-operators-85bbc7fbc6-9gqhd node/ip-10-0-128-139.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Jul 17 14:53:56.875 E ns/openshift-monitoring pod/openshift-state-metrics-f6bcd4fc-xljvt node/ip-10-0-128-139.us-west-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Jul 17 14:53:58.008 E ns/openshift-marketplace pod/community-operators-667756b87f-dchz2 node/ip-10-0-128-139.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Jul 17 14:53:58.019 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-139.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/07/17 14:38:51 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jul 17 14:53:58.019 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-139.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/07/17 14:38:52 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/17 14:38:52 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/17 14:38:52 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/17 14:38:52 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/17 14:38:52 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/17 14:38:52 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/17 14:38:52 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/17 14:38:52 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/17 14:38:52 http.go:106: HTTPS: listening on [::]:9091\n2020/07/17 14:42:50 reverseproxy.go:447: http: proxy error: context canceled\n2020/07/17 14:52:02 oauthproxy.go:774: basicauth: 10.129.2.30:49054 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/17 14:52:06 oauthproxy.go:774: basicauth: 10.129.0.70:42580 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 17 14:53:58.019 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-139.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-07-17T14:38:51.752598636Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-07-17T14:38:51.752736965Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-07-17T14:38:51.75486878Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-07-17T14:38:56.926507162Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jul 17 14:54:10.104 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-149-163.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-17T14:54:06.773Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-07-17T14:54:06.788Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-17T14:54:06.793Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-17T14:54:06.794Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-17T14:54:06.794Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-07-17T14:54:06.794Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-17T14:54:06.794Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-17T14:54:06.794Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-17T14:54:06.794Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-17T14:54:06.794Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-17T14:54:06.794Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-17T14:54:06.794Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-17T14:54:06.794Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-07-17T14:54:06.794Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-17T14:54:06.795Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-17T14:54:06.795Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-07-17
Jul 17 14:54:25.111 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Jul 17 14:55:03.044 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-153-242.us-west-2.compute.internal" not ready since 2020-07-17 14:55:02 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)
Jul 17 14:55:03.051 E clusteroperator/kube-controller-manager changed Degraded to True: NodeControllerDegradedMasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-153-242.us-west-2.compute.internal" not ready since 2020-07-17 14:55:02 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)
Jul 17 14:55:03.051 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-153-242.us-west-2.compute.internal" not ready since 2020-07-17 14:55:02 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)
Jul 17 14:55:03.186 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-153-242.us-west-2.compute.internal node/ip-10-0-153-242.us-west-2.compute.internal container=scheduler container exited with code 2 (Error): ersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0717 14:40:22.302301       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)\nE0717 14:40:22.317190       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: unknown (get pods)\nE0717 14:40:22.317249       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0717 14:40:22.317190       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0717 14:40:22.317305       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nW0717 14:40:22.321510       1 reflector.go:299] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: too old resource version: 24143 (27886)\nW0717 14:40:22.321565       1 reflector.go:299] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: too old resource version: 24143 (27886)\nE0717 14:40:22.324505       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0717 14:40:22.324635       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\nE0717 14:40:22.352599       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\nE0717 14:40:22.357045       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)\nE0717 14:40:22.363672       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: unknown (get csinodes.storage.k8s.io)\n
Jul 17 14:55:03.280 E ns/openshift-monitoring pod/node-exporter-87xnx node/ip-10-0-153-242.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-17T14:38:37Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:37Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 17 14:55:03.289 E ns/openshift-controller-manager pod/controller-manager-sb695 node/ip-10-0-153-242.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 17 14:55:03.304 E ns/openshift-sdn pod/ovs-4cqkg node/ip-10-0-153-242.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): Z|00018|jsonrpc|WARN|Dropped 3 log messages in last 662 seconds (most recently, 662 seconds ago) due to excessive rate\n2020-07-17T14:51:53.532Z|00019|jsonrpc|WARN|unix#655: send error: Broken pipe\n2020-07-17T14:51:53.532Z|00020|reconnect|WARN|unix#655: connection dropped (Broken pipe)\n2020-07-17T14:51:54.271Z|00240|connmgr|INFO|br0<->unix#762: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:51:54.308Z|00241|connmgr|INFO|br0<->unix#765: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:51:54.332Z|00242|bridge|INFO|bridge br0: deleted interface veth8178bcaf on port 7\n2020-07-17T14:51:54.495Z|00243|connmgr|INFO|br0<->unix#768: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:51:54.533Z|00244|connmgr|INFO|br0<->unix#771: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:51:54.554Z|00245|bridge|INFO|bridge br0: deleted interface veth4de64c17 on port 16\n2020-07-17T14:51:54.823Z|00246|connmgr|INFO|br0<->unix#774: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:51:54.855Z|00247|connmgr|INFO|br0<->unix#777: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:51:54.878Z|00248|bridge|INFO|bridge br0: deleted interface veth77ae8ccf on port 9\n2020-07-17T14:51:54.872Z|00021|jsonrpc|WARN|unix#680: receive error: Connection reset by peer\n2020-07-17T14:51:54.872Z|00022|reconnect|WARN|unix#680: connection dropped (Connection reset by peer)\n2020-07-17T14:52:18.064Z|00249|connmgr|INFO|br0<->unix#795: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:52:18.089Z|00250|connmgr|INFO|br0<->unix#798: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:52:18.109Z|00251|bridge|INFO|bridge br0: deleted interface veth83fd90b9 on port 5\n2020-07-17 14:52:22 info: Saving flows ...\n2020-07-17T14:52:22Z|00001|jsonrpc|WARN|unix:/var/run/openvswitch/db.sock: receive error: Connection reset by peer\n2020-07-17T14:52:22Z|00002|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection dropped (Connection reset by peer)\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection reset by peer)\n
Jul 17 14:55:03.333 E ns/openshift-sdn pod/sdn-controller-zmhbh node/ip-10-0-153-242.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0717 14:40:50.696999       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0717 14:40:50.715748       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"770b23b3-596c-4e1f-bc61-ded59867e199", ResourceVersion:"28267", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730591892, loc:(*time.Location)(0x2b7dcc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-153-242\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-07-17T14:11:32Z\",\"renewTime\":\"2020-07-17T14:40:50Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-153-242 became leader'\nI0717 14:40:50.715819       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0717 14:40:50.723456       1 master.go:51] Initializing SDN master\nI0717 14:40:50.736652       1 network_controller.go:60] Started OpenShift Network Controller\n
Jul 17 14:55:03.343 E ns/openshift-multus pod/multus-admission-controller-b5gkm node/ip-10-0-153-242.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Jul 17 14:55:03.357 E ns/openshift-multus pod/multus-zqhz9 node/ip-10-0-153-242.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Jul 17 14:55:03.377 E ns/openshift-machine-config-operator pod/machine-config-server-9q56l node/ip-10-0-153-242.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0717 14:52:03.041224       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-6-ga3a98da0-dirty (a3a98da0434ff1a3d5d6ad27df2237a91ebadf53)\nI0717 14:52:03.042670       1 api.go:56] Launching server on :22624\nI0717 14:52:03.042988       1 api.go:56] Launching server on :22623\n
Jul 17 14:55:03.387 E ns/openshift-cluster-node-tuning-operator pod/tuned-qsbhq node/ip-10-0-153-242.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 717 14:52:18.347380  100486 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:52:18.348881  100486 openshift-tuned.go:390] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0717 14:52:18.349855  100486 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:52:18.459733  100486 openshift-tuned.go:635] Active profile () != recommended profile (openshift-control-plane)\nI0717 14:52:18.459765  100486 openshift-tuned.go:263] Starting tuned...\n2020-07-17 14:52:18,560 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-07-17 14:52:18,566 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-07-17 14:52:18,567 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-07-17 14:52:18,568 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-07-17 14:52:18,569 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-07-17 14:52:18,608 INFO     tuned.daemon.controller: starting controller\n2020-07-17 14:52:18,608 INFO     tuned.daemon.daemon: starting tuning\n2020-07-17 14:52:18,614 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-07-17 14:52:18,615 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-07-17 14:52:18,619 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-07-17 14:52:18,621 INFO     tuned.plugins.base: instance disk: assigning devices dm-0\n2020-07-17 14:52:18,623 INFO     tuned.plugins.base: instance net: assigning devices ens5\n2020-07-17 14:52:18,697 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-07-17 14:52:18,705 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0717 14:52:21.980340  100486 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-89c69dd88-n69q5) labels changed node wide: true\n
Jul 17 14:55:03.415 E ns/openshift-machine-config-operator pod/machine-config-daemon-wk98c node/ip-10-0-153-242.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 17 14:55:08.066 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-242.us-west-2.compute.internal node/ip-10-0-153-242.us-west-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): red revision has been compacted\nE0717 14:52:22.278700       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:52:22.278744       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:52:22.278788       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:52:22.278856       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:52:22.278885       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:52:22.278923       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:52:22.278978       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:52:22.279119       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:52:22.279119       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:52:22.313974       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:52:22.313974       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:52:22.314131       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:52:22.314274       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0717 14:52:22.593136       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-153-242.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0717 14:52:22.593351       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Jul 17 14:55:08.066 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-242.us-west-2.compute.internal node/ip-10-0-153-242.us-west-2.compute.internal container=kube-apiserver-insecure-readyz-6 container exited with code 2 (Error): I0717 14:38:38.511839       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Jul 17 14:55:08.066 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-242.us-west-2.compute.internal node/ip-10-0-153-242.us-west-2.compute.internal container=kube-apiserver-cert-syncer-6 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0717 14:50:23.371662       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:50:23.371955       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0717 14:50:23.577242       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:50:23.578065       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Jul 17 14:55:09.373 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-242.us-west-2.compute.internal node/ip-10-0-153-242.us-west-2.compute.internal container=cluster-policy-controller-6 container exited with code 1 (Error): I0717 14:38:22.196171       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0717 14:38:22.198787       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0717 14:38:22.199650       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0717 14:40:00.623870       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0717 14:40:22.232082       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: configmaps "cluster-policy-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\n
Jul 17 14:55:09.373 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-242.us-west-2.compute.internal node/ip-10-0-153-242.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:51:11.628336       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:51:11.628714       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:51:21.634485       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:51:21.635234       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:51:31.642989       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:51:31.643368       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:51:41.651737       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:51:41.651972       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:51:51.659333       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:51:51.659641       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:52:01.668460       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:52:01.668807       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:52:11.681972       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:52:11.682265       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:52:21.690775       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:52:21.691032       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Jul 17 14:55:09.373 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-242.us-west-2.compute.internal node/ip-10-0-153-242.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 2 (Error): :01.092081       1 webhook.go:107] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0717 14:40:01.092120       1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0717 14:40:04.530174       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0717 14:40:09.450615       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0717 14:40:14.261370       1 webhook.go:107] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0717 14:40:14.261405       1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0717 14:40:14.762984       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0717 14:40:22.198980       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
Jul 17 14:55:09.510 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io prometheus-k8s)
Jul 17 14:55:10.345 E ns/openshift-multus pod/multus-zqhz9 node/ip-10-0-153-242.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 17 14:55:14.625 E ns/openshift-machine-config-operator pod/machine-config-daemon-wk98c node/ip-10-0-153-242.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 17 14:55:32.061 E ns/openshift-machine-api pod/machine-api-controllers-555c95bfb4-z6zl9 node/ip-10-0-130-128.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 17 14:55:32.529 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-7d68c8c964-pbj55 node/ip-10-0-130-128.us-west-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\\n\"\nStaticPodsDegraded: nodes/ip-10-0-153-242.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-153-242.us-west-2.compute.internal container=\"kube-apiserver-insecure-readyz-6\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-153-242.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-153-242.us-west-2.compute.internal container=\"kube-apiserver-insecure-readyz-6\" is terminated: \"Error\" - \"I0717 14:38:38.511839       1 readyz.go:103] Listening on 0.0.0.0:6080\\n\"" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-153-242.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-153-242.us-west-2.compute.internal container=\"kube-apiserver-6\" is not ready"\nI0717 14:55:30.183121       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"bbddb41b-fda3-429d-bda5-3841776390de", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-153-242.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-153-242.us-west-2.compute.internal container=\"kube-apiserver-6\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0717 14:55:30.852764       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0717 14:55:30.852843       1 leaderelection.go:66] leaderelection lost\n
Jul 17 14:55:33.413 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-74c7f45766-87v5b node/ip-10-0-130-128.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): manager-6\" is not ready"\nI0717 14:55:27.468346       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"e9f9d4ae-bc92-4c75-aa00-d69582b5189c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-153-242.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-153-242.us-west-2.compute.internal container=\"cluster-policy-controller-6\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-153-242.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-153-242.us-west-2.compute.internal container=\"kube-controller-manager-6\" is not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-153-242.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-153-242.us-west-2.compute.internal container=\"kube-controller-manager-6\" is not ready"\nI0717 14:55:30.467808       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"e9f9d4ae-bc92-4c75-aa00-d69582b5189c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-153-242.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-153-242.us-west-2.compute.internal container=\"kube-controller-manager-6\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0717 14:55:30.790010       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0717 14:55:30.790136       1 leaderelection.go:66] leaderelection lost\n
Jul 17 14:55:35.068 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-7d6b469c99-nm92k node/ip-10-0-130-128.us-west-2.compute.internal container=operator container exited with code 255 (Error): erator/openshift-cluster-svcat-apiserver-operator-lock\nI0717 14:54:52.891159       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0717 14:54:52.891185       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0717 14:54:52.892440       1 httplog.go:90] GET /metrics: (1.367971ms) 200 [Prometheus/2.14.0 10.131.0.13:58078]\nI0717 14:55:00.599958       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0717 14:55:10.608189       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0717 14:55:16.867333       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0717 14:55:16.867354       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0717 14:55:16.868577       1 httplog.go:90] GET /metrics: (4.69295ms) 200 [Prometheus/2.14.0 10.129.2.32:56346]\nI0717 14:55:20.619068       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0717 14:55:22.891054       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0717 14:55:22.891076       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0717 14:55:22.892251       1 httplog.go:90] GET /metrics: (1.298465ms) 200 [Prometheus/2.14.0 10.131.0.13:58078]\nI0717 14:55:30.660086       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0717 14:55:32.264565       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0717 14:55:32.264928       1 leaderelection.go:66] leaderelection lost\n
Jul 17 14:55:59.673 E kube-apiserver failed contacting the API: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=34066&timeout=8m37s&timeoutSeconds=517&watch=true: dial tcp 35.155.23.131:6443: connect: connection refused
Jul 17 14:56:04.535 E kube-apiserver Kube API started failing: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jul 17 14:56:14.534 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 17 14:56:42.653 E ns/openshift-monitoring pod/node-exporter-q2b8x node/ip-10-0-128-139.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-17T14:37:57Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-17T14:37:57Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 17 14:56:42.698 E ns/openshift-multus pod/multus-mk2sn node/ip-10-0-128-139.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Jul 17 14:56:42.734 E ns/openshift-sdn pod/ovs-hj7d4 node/ip-10-0-128-139.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error):  br0: deleted interface vethb762205e on port 4\n2020-07-17T14:53:57.401Z|00183|connmgr|INFO|br0<->unix#753: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:53:57.452Z|00184|connmgr|INFO|br0<->unix#756: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:53:57.482Z|00185|bridge|INFO|bridge br0: deleted interface veth2516e662 on port 20\n2020-07-17T14:53:57.545Z|00186|connmgr|INFO|br0<->unix#759: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:53:57.579Z|00187|connmgr|INFO|br0<->unix#762: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:53:57.609Z|00188|bridge|INFO|bridge br0: deleted interface veth10611bbe on port 19\n2020-07-17T14:53:57.648Z|00189|connmgr|INFO|br0<->unix#765: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:53:57.685Z|00190|connmgr|INFO|br0<->unix#768: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:53:57.709Z|00191|bridge|INFO|bridge br0: deleted interface veth925bbe17 on port 13\n2020-07-17T14:54:11.261Z|00192|connmgr|INFO|br0<->unix#780: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:54:11.288Z|00193|connmgr|INFO|br0<->unix#783: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:54:11.310Z|00194|bridge|INFO|bridge br0: deleted interface vethe99ffeba on port 21\n2020-07-17T14:54:41.468Z|00026|jsonrpc|WARN|unix#723: receive error: Connection reset by peer\n2020-07-17T14:54:41.468Z|00027|reconnect|WARN|unix#723: connection dropped (Connection reset by peer)\n2020-07-17T14:54:41.472Z|00028|jsonrpc|WARN|unix#724: receive error: Connection reset by peer\n2020-07-17T14:54:41.472Z|00029|reconnect|WARN|unix#724: connection dropped (Connection reset by peer)\n2020-07-17T14:54:41.431Z|00195|connmgr|INFO|br0<->unix#807: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:54:41.457Z|00196|connmgr|INFO|br0<->unix#810: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:54:41.478Z|00197|bridge|INFO|bridge br0: deleted interface veth8c3c8e1e on port 5\n info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 17 14:56:42.756 E ns/openshift-machine-config-operator pod/machine-config-daemon-vpnm8 node/ip-10-0-128-139.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 17 14:56:42.776 E ns/openshift-cluster-node-tuning-operator pod/tuned-nf6sk node/ip-10-0-128-139.us-west-2.compute.internal container=tuned container exited with code 143 (Error): noring CPU energy performance bias\n2020-07-17 14:52:32,854 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-07-17 14:52:32,856 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-07-17 14:52:32,972 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-07-17 14:52:32,974 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0717 14:53:32.715445  118136 openshift-tuned.go:550] Pod (openshift-marketplace/redhat-operators-586fd5b5f8-pztcr) labels changed node wide: true\nI0717 14:53:37.556253  118136 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:53:37.557786  118136 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:53:37.787132  118136 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:54:02.714957  118136 openshift-tuned.go:550] Pod (openshift-monitoring/openshift-state-metrics-f6bcd4fc-xljvt) labels changed node wide: true\nI0717 14:54:07.555110  118136 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:54:07.556470  118136 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:54:07.670617  118136 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:54:22.713996  118136 openshift-tuned.go:550] Pod (openshift-marketplace/redhat-operators-77999f8c85-cfvv2) labels changed node wide: true\nI0717 14:54:27.555090  118136 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:54:27.556469  118136 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:54:27.670184  118136 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:54:52.712798  118136 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-6750/service-test-dmhqt) labels changed node wide: true\n
Jul 17 14:56:46.339 E ns/openshift-multus pod/multus-mk2sn node/ip-10-0-128-139.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 17 14:56:48.338 E ns/openshift-multus pod/multus-mk2sn node/ip-10-0-128-139.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 17 14:56:51.344 E ns/openshift-machine-config-operator pod/machine-config-daemon-vpnm8 node/ip-10-0-128-139.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 17 14:57:14.534 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Jul 17 14:57:22.992 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Jul 17 14:57:33.814 E ns/openshift-monitoring pod/thanos-querier-5d77486f4d-77pn5 node/ip-10-0-137-105.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/07/17 14:51:56 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/17 14:51:56 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/17 14:51:56 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/17 14:51:56 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/17 14:51:56 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/17 14:51:56 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/17 14:51:56 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/17 14:51:56 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/17 14:51:56 http.go:106: HTTPS: listening on [::]:9091\n
Jul 17 14:57:34.821 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-137-105.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/17 14:38:40 Watching directory: "/etc/alertmanager/config"\n
Jul 17 14:57:34.821 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-137-105.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/17 14:38:40 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/17 14:38:40 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/17 14:38:40 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/17 14:38:40 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/17 14:38:40 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/17 14:38:40 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/17 14:38:40 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/17 14:38:40 http.go:106: HTTPS: listening on [::]:9095\n2020/07/17 14:41:15 reverseproxy.go:447: http: proxy error: context canceled\n2020/07/17 14:42:02 reverseproxy.go:447: http: proxy error: context canceled\n
Jul 17 14:57:34.884 E ns/openshift-monitoring pod/kube-state-metrics-58767b99bd-9vvtc node/ip-10-0-137-105.us-west-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Jul 17 14:57:34.905 E ns/openshift-monitoring pod/prometheus-adapter-6d46784f5c-nbfk2 node/ip-10-0-137-105.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0717 14:38:05.101023       1 adapter.go:93] successfully using in-cluster auth\nI0717 14:38:06.429421       1 secure_serving.go:116] Serving securely on [::]:6443\nW0717 14:55:59.434058       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Pod ended with: very short watch: k8s.io/client-go/informers/factory.go:133: Unexpected watch close - watch lasted less than a second and no items received\n
Jul 17 14:57:34.919 E ns/openshift-ingress pod/router-default-894dc5796-mcvwj node/ip-10-0-137-105.us-west-2.compute.internal container=router container exited with code 2 (Error): router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:55:58.218239       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nW0717 14:55:59.433721       1 reflector.go:299] github.com/openshift/router/pkg/router/controller/factory/factory.go:115: watch of *v1.Endpoints ended with: very short watch: github.com/openshift/router/pkg/router/controller/factory/factory.go:115: Unexpected watch close - watch lasted less than a second and no items received\nI0717 14:56:03.215117       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:56:18.778718       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:57:05.411745       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:57:10.408167       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:57:18.147659       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:57:23.142749       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0717 14:57:32.418481       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Jul 17 14:57:34.935 E ns/openshift-monitoring pod/telemeter-client-7487687875-tgqr2 node/ip-10-0-137-105.us-west-2.compute.internal container=reload container exited with code 2 (Error): 
Jul 17 14:57:34.935 E ns/openshift-monitoring pod/telemeter-client-7487687875-tgqr2 node/ip-10-0-137-105.us-west-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Jul 17 14:58:14.782 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-128.us-west-2.compute.internal node/ip-10-0-130-128.us-west-2.compute.internal container=cluster-policy-controller-6 container exited with code 1 (Error): ceAccount ended with: too old resource version: 22382 (34067)\nW0717 14:52:23.309824       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ReplicationController ended with: too old resource version: 29019 (34067)\nW0717 14:52:23.324235       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 22382 (34067)\nW0717 14:52:23.331336       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.LimitRange ended with: too old resource version: 22382 (34067)\nW0717 14:52:23.341474       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.RoleBinding ended with: too old resource version: 22383 (34067)\nI0717 14:52:49.803060       1 trace.go:81] Trace[658032027]: "Reflector github.com/openshift/client-go/image/informers/externalversions/factory.go:101 ListAndWatch" (started: 2020-07-17 14:52:23.28603255 +0000 UTC m=+1000.204813298) (total time: 26.516984618s):\nTrace[658032027]: [26.516841865s] [26.516841865s] Objects listed\nE0717 14:52:52.681618       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)\nE0717 14:52:52.681788       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0717 14:53:19.805844       1 reflector.go:270] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)\nW0717 14:55:58.628053       1 reflector.go:289] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: The resourceVersion for the provided watch is too old.\n
Jul 17 14:58:14.782 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-128.us-west-2.compute.internal node/ip-10-0-130-128.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:55:21.814667       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:55:21.814989       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:55:31.924210       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:55:31.925313       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:55:39.299317       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:55:39.299633       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:55:39.299926       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:55:39.300089       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:55:39.326510       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:55:39.326771       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:55:39.327076       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:55:39.327241       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:55:41.930930       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:55:41.931292       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:55:51.952563       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:55:51.952850       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Jul 17 14:58:14.782 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-128.us-west-2.compute.internal node/ip-10-0-130-128.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 2 (Error):  'ScalingReplicaSet' Scaled up replica set packageserver-754564d9b6 to 2\nI0717 14:55:49.167048       1 deployment_controller.go:484] Error syncing deployment openshift-operator-lifecycle-manager/packageserver: Operation cannot be fulfilled on deployments.apps "packageserver": the object has been modified; please apply your changes to the latest version and try again\nI0717 14:55:49.187514       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-754564d9b6", UID:"0d761938-9da0-4440-a5ab-b6e4a8b95d7b", APIVersion:"apps/v1", ResourceVersion:"37064", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-754564d9b6-qj4rr\nI0717 14:55:55.057542       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/telemeter-client: Operation cannot be fulfilled on deployments.apps "telemeter-client": the object has been modified; please apply your changes to the latest version and try again\nI0717 14:55:58.056501       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/thanos-querier: Operation cannot be fulfilled on deployments.apps "thanos-querier": the object has been modified; please apply your changes to the latest version and try again\nI0717 14:55:58.707326       1 endpoints_controller.go:340] Error syncing endpoints for service "openshift-etcd/etcd", retrying. Error: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\nI0717 14:55:58.707458       1 event.go:255] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"openshift-etcd", Name:"etcd", UID:"f6b11f30-8aae-4b06-b3e5-ef7ad77f4074", APIVersion:"v1", ResourceVersion:"37158", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint openshift-etcd/etcd: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\n
Jul 17 14:58:14.842 E ns/openshift-controller-manager pod/controller-manager-lnkdx node/ip-10-0-130-128.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 17 14:58:14.862 E ns/openshift-monitoring pod/node-exporter-2c5g4 node/ip-10-0-130-128.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-17T14:38:05Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:05Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 17 14:58:14.899 E ns/openshift-sdn pod/sdn-controller-kqmd6 node/ip-10-0-130-128.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0717 14:40:36.600669       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Jul 17 14:58:14.909 E ns/openshift-multus pod/multus-admission-controller-wlzvb node/ip-10-0-130-128.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Jul 17 14:58:14.924 E ns/openshift-multus pod/multus-6qg2v node/ip-10-0-130-128.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Jul 17 14:58:14.936 E ns/openshift-sdn pod/ovs-87vhz node/ip-10-0-130-128.us-west-2.compute.internal container=openvswitch container exited with code 143 (Error): 0 s (2 deletes)\n2020-07-17T14:55:33.316Z|00262|connmgr|INFO|br0<->unix#879: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:55:33.351Z|00263|bridge|INFO|bridge br0: deleted interface vethebb85274 on port 31\n2020-07-17T14:55:33.402Z|00264|connmgr|INFO|br0<->unix#882: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:55:33.443Z|00265|connmgr|INFO|br0<->unix#885: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:55:33.585Z|00266|bridge|INFO|bridge br0: deleted interface veth0584f44d on port 8\n2020-07-17T14:55:33.650Z|00267|connmgr|INFO|br0<->unix#888: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:55:33.709Z|00268|connmgr|INFO|br0<->unix#891: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:55:33.766Z|00269|bridge|INFO|bridge br0: deleted interface veth37c4df8e on port 12\n2020-07-17T14:55:33.847Z|00270|connmgr|INFO|br0<->unix#894: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:55:33.993Z|00271|connmgr|INFO|br0<->unix#897: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:55:34.075Z|00272|bridge|INFO|bridge br0: deleted interface veth2b958005 on port 28\n2020-07-17T14:55:34.183Z|00273|connmgr|INFO|br0<->unix#900: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:55:34.240Z|00274|connmgr|INFO|br0<->unix#903: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:55:34.283Z|00275|bridge|INFO|bridge br0: deleted interface vetheea912ed on port 24\n2020-07-17T14:55:34.438Z|00276|connmgr|INFO|br0<->unix#906: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:55:34.504Z|00277|connmgr|INFO|br0<->unix#909: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:55:34.555Z|00278|bridge|INFO|bridge br0: deleted interface veth48a12269 on port 26\n2020-07-17T14:55:51.286Z|00279|connmgr|INFO|br0<->unix#924: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:55:51.312Z|00280|connmgr|INFO|br0<->unix#927: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:55:51.336Z|00281|bridge|INFO|bridge br0: deleted interface veth790d6c9f on port 16\n2020-07-17 14:55:58 info: Saving flows ...\nTerminated\n
Jul 17 14:58:14.957 E ns/openshift-machine-config-operator pod/machine-config-daemon-jmksx node/ip-10-0-130-128.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 17 14:58:14.968 E ns/openshift-machine-config-operator pod/machine-config-server-c56lc node/ip-10-0-130-128.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0717 14:51:40.999944       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-6-ga3a98da0-dirty (a3a98da0434ff1a3d5d6ad27df2237a91ebadf53)\nI0717 14:51:41.001171       1 api.go:56] Launching server on :22624\nI0717 14:51:41.001214       1 api.go:56] Launching server on :22623\n
Jul 17 14:58:14.978 E ns/openshift-cluster-node-tuning-operator pod/tuned-5tpw5 node/ip-10-0-130-128.us-west-2.compute.internal container=tuned container exited with code 143 (Error): d (openshift-kube-controller-manager/revision-pruner-5-ip-10-0-130-128.us-west-2.compute.internal) labels changed node wide: false\nI0717 14:55:31.006049  101870 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-6-ip-10-0-130-128.us-west-2.compute.internal) labels changed node wide: true\nI0717 14:55:33.931626  101870 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:55:33.933372  101870 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:55:34.446796  101870 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0717 14:55:34.454269  101870 openshift-tuned.go:550] Pod (openshift-ingress-operator/ingress-operator-5fb7994bc6-drpcm) labels changed node wide: true\nI0717 14:55:38.929918  101870 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:55:38.931157  101870 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:55:39.062867  101870 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0717 14:55:47.916436  101870 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-controller-7b69799875-47ll7) labels changed node wide: true\nI0717 14:55:48.929928  101870 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:55:48.931321  101870 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:55:49.032905  101870 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0717 14:55:57.781155  101870 openshift-tuned.go:550] Pod (openshift-kube-scheduler/revision-pruner-7-ip-10-0-130-128.us-west-2.compute.internal) labels changed node wide: true\nI0717 14:55:58.865057  101870 openshift-tuned.go:137] Received signal: terminated\nI0717 14:55:58.865129  101870 openshift-tuned.go:304] Sending TERM to PID 102189\n
Jul 17 14:58:18.641 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-128.us-west-2.compute.internal node/ip-10-0-130-128.us-west-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): red revision has been compacted\nE0717 14:55:58.643722       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:55:58.643811       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:55:58.644580       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:55:58.644830       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:55:58.644964       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:55:58.647990       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:55:58.648106       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:55:58.648131       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:55:58.648234       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:55:58.648255       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:55:58.648488       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:55:58.648535       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0717 14:55:58.648641       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0717 14:55:59.077828       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-130-128.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0717 14:55:59.079341       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Jul 17 14:58:18.641 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-128.us-west-2.compute.internal node/ip-10-0-130-128.us-west-2.compute.internal container=kube-apiserver-insecure-readyz-6 container exited with code 2 (Error): I0717 14:33:52.465980       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Jul 17 14:58:18.641 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-128.us-west-2.compute.internal node/ip-10-0-130-128.us-west-2.compute.internal container=kube-apiserver-cert-syncer-6 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0717 14:55:39.319358       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:55:39.320194       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0717 14:55:39.525962       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:55:39.526199       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Jul 17 14:58:19.657 E ns/openshift-multus pod/multus-6qg2v node/ip-10-0-130-128.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 17 14:58:19.707 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-130-128.us-west-2.compute.internal node/ip-10-0-130-128.us-west-2.compute.internal container=scheduler container exited with code 2 (Error): ting\nI0717 14:55:43.669515       1 scheduler.go:667] pod openshift-marketplace/certified-operators-cfb6fdf9-6ffr9 is bound successfully on node "ip-10-0-149-163.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419376Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15268400Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0717 14:55:43.857640       1 scheduler.go:667] pod openshift-marketplace/community-operators-687cb59ffd-sd4pp is bound successfully on node "ip-10-0-149-163.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419376Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15268400Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0717 14:55:44.034013       1 scheduler.go:667] pod openshift-marketplace/redhat-operators-699b487974-d7vf7 is bound successfully on node "ip-10-0-149-163.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419376Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15268400Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0717 14:55:49.034327       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7759f86b7-69vfc: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0717 14:55:49.197155       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-754564d9b6-qj4rr is bound successfully on node "ip-10-0-142-167.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<15946292Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<14795316Ki>|Pods<250>|StorageEphemeral<114381692328>.".\n
Jul 17 14:58:19.759 E ns/openshift-monitoring pod/node-exporter-2c5g4 node/ip-10-0-130-128.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 17 14:58:21.892 E ns/openshift-multus pod/multus-6qg2v node/ip-10-0-130-128.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 17 14:58:27.054 E ns/openshift-machine-config-operator pod/machine-config-daemon-jmksx node/ip-10-0-130-128.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 17 14:59:01.134 E clusteroperator/authentication changed Degraded to True: OAuthClientsDegradedError: OAuthClientsDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io openshift-browser-client)
Jul 17 14:59:06.869 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6c7f85fc88-9g5fp node/ip-10-0-142-167.us-west-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): iserver-operator", Name:"openshift-apiserver-operator", UID:"3ebdd910-9959-45d2-a722-9e644d93b186", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'OpenShiftAPICheckFailed' "build.openshift.io.v1" failed with HTTP status code 503 (the server is currently unable to handle the request)\nI0717 14:57:48.446392       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"3ebdd910-9959-45d2-a722-9e644d93b186", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "Available: \"image.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)\nAvailable: \"quota.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)\nAvailable: \"user.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)" to "Available: \"apps.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)\nAvailable: \"authorization.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)\nAvailable: \"build.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)"\nI0717 14:57:53.741303       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"3ebdd910-9959-45d2-a722-9e644d93b186", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("")\nI0717 14:59:05.938342       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0717 14:59:05.938399       1 leaderelection.go:66] leaderelection lost\nF0717 14:59:05.946820       1 builder.go:217] server exited\n
Jul 17 14:59:08.958 E ns/openshift-cluster-machine-approver pod/machine-approver-769cffbc6b-fcf7z node/ip-10-0-142-167.us-west-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nI0717 14:38:00.709702       1 main.go:146] CSR csr-qb74h added\nI0717 14:38:00.709729       1 main.go:149] CSR csr-qb74h is already approved\nI0717 14:38:00.709748       1 main.go:146] CSR csr-scxt7 added\nI0717 14:38:00.709859       1 main.go:149] CSR csr-scxt7 is already approved\nI0717 14:38:00.709881       1 main.go:146] CSR csr-z7q54 added\nI0717 14:38:00.709887       1 main.go:149] CSR csr-z7q54 is already approved\nI0717 14:38:00.709895       1 main.go:146] CSR csr-78n9x added\nI0717 14:38:00.709901       1 main.go:149] CSR csr-78n9x is already approved\nI0717 14:38:00.709908       1 main.go:146] CSR csr-brktf added\nI0717 14:38:00.709914       1 main.go:149] CSR csr-brktf is already approved\nI0717 14:38:00.709960       1 main.go:146] CSR csr-f7zsp added\nI0717 14:38:00.709971       1 main.go:149] CSR csr-f7zsp is already approved\nI0717 14:38:00.709979       1 main.go:146] CSR csr-l8v8v added\nI0717 14:38:00.709985       1 main.go:149] CSR csr-l8v8v is already approved\nI0717 14:38:00.709995       1 main.go:146] CSR csr-nnhms added\nI0717 14:38:00.710762       1 main.go:149] CSR csr-nnhms is already approved\nI0717 14:38:00.710806       1 main.go:146] CSR csr-7d47v added\nI0717 14:38:00.710813       1 main.go:149] CSR csr-7d47v is already approved\nI0717 14:38:00.710823       1 main.go:146] CSR csr-7pxlb added\nI0717 14:38:00.710829       1 main.go:149] CSR csr-7pxlb is already approved\nI0717 14:38:00.710837       1 main.go:146] CSR csr-gvbg6 added\nI0717 14:38:00.710844       1 main.go:149] CSR csr-gvbg6 is already approved\nI0717 14:38:00.713501       1 main.go:146] CSR csr-qchl5 added\nI0717 14:38:00.713515       1 main.go:149] CSR csr-qchl5 is already approved\nW0717 14:52:23.284538       1 reflector.go:289] github.com/openshift/cluster-machine-approver/main.go:238: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 25615 (34067)\n
Jul 17 14:59:10.011 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-655b5644b4-gvdqv node/ip-10-0-142-167.us-west-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): duler.go:667] pod openshift-marketplace/community-operators-687cb59ffd-sd4pp is bound successfully on node \\\"ip-10-0-149-163.us-west-2.compute.internal\\\", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: \\\"Capacity: CPU<4>|Memory<16419376Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15268400Ki>|Pods<250>|StorageEphemeral<114381692328>.\\\".\\nI0717 14:55:44.034013       1 scheduler.go:667] pod openshift-marketplace/redhat-operators-699b487974-d7vf7 is bound successfully on node \\\"ip-10-0-149-163.us-west-2.compute.internal\\\", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: \\\"Capacity: CPU<4>|Memory<16419376Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15268400Ki>|Pods<250>|StorageEphemeral<114381692328>.\\\".\\nI0717 14:55:49.034327       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7759f86b7-69vfc: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\\nI0717 14:55:49.197155       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-754564d9b6-qj4rr is bound successfully on node \\\"ip-10-0-142-167.us-west-2.compute.internal\\\", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: \\\"Capacity: CPU<4>|Memory<15946292Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<14795316Ki>|Pods<250>|StorageEphemeral<114381692328>.\\\".\\n\"" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-130-128.us-west-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-130-128.us-west-2.compute.internal container=\"scheduler\" is not ready"\nI0717 14:59:09.034023       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0717 14:59:09.034129       1 leaderelection.go:66] leaderelection lost\n
Jul 17 14:59:11.309 E ns/openshift-console pod/console-867bd6fbdb-hsvww node/ip-10-0-142-167.us-west-2.compute.internal container=console container exited with code 2 (Error): ed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/07/17 14:56:06 auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/07/17 14:57:37 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/07/17 14:57:38 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/07/17 14:57:43 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/07/17 14:57:52 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Jul 17 14:59:42.598 E kube-apiserver Kube API started failing: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: unexpected EOF
Jul 17 14:59:49.535 E kube-apiserver Kube API started failing: Get https://api.ci-op-80zpv3cc-10b72.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 17 14:59:59.534 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 17 15:00:37.352 E ns/openshift-monitoring pod/node-exporter-gdl29 node/ip-10-0-137-105.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-17T14:38:15Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:15Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 17 15:00:37.382 E ns/openshift-sdn pod/ovs-zwrwq node/ip-10-0-137-105.us-west-2.compute.internal container=openvswitch container exited with code 143 (Error): 4:57:34.444Z|00196|connmgr|INFO|br0<->unix#898: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:57:34.497Z|00197|connmgr|INFO|br0<->unix#901: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:57:34.520Z|00198|bridge|INFO|bridge br0: deleted interface veth4335747e on port 5\n2020-07-17T14:57:34.566Z|00199|connmgr|INFO|br0<->unix#904: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:57:34.601Z|00200|connmgr|INFO|br0<->unix#907: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:57:34.630Z|00201|bridge|INFO|bridge br0: deleted interface veth17dbc60e on port 12\n2020-07-17T14:58:02.722Z|00025|jsonrpc|WARN|Dropped 6 log messages in last 962 seconds (most recently, 962 seconds ago) due to excessive rate\n2020-07-17T14:58:02.722Z|00026|jsonrpc|WARN|unix#842: receive error: Connection reset by peer\n2020-07-17T14:58:02.722Z|00027|reconnect|WARN|unix#842: connection dropped (Connection reset by peer)\n2020-07-17T14:58:02.685Z|00202|connmgr|INFO|br0<->unix#926: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:58:02.712Z|00203|connmgr|INFO|br0<->unix#929: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:58:02.734Z|00204|bridge|INFO|bridge br0: deleted interface vethe3107a6c on port 7\n2020-07-17T14:58:02.968Z|00028|jsonrpc|WARN|unix#847: receive error: Connection reset by peer\n2020-07-17T14:58:02.968Z|00029|reconnect|WARN|unix#847: connection dropped (Connection reset by peer)\n2020-07-17T14:58:02.931Z|00205|connmgr|INFO|br0<->unix#932: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:58:02.958Z|00206|connmgr|INFO|br0<->unix#935: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:58:02.978Z|00207|bridge|INFO|bridge br0: deleted interface vethd20d574f on port 10\n2020-07-17T14:58:18.164Z|00208|connmgr|INFO|br0<->unix#950: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:58:18.192Z|00209|connmgr|INFO|br0<->unix#953: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:58:18.212Z|00210|bridge|INFO|bridge br0: deleted interface veth32a0ef32 on port 11\n2020-07-17 14:58:48 info: Saving flows ...\n
Jul 17 15:00:37.436 E ns/openshift-multus pod/multus-xgb8s node/ip-10-0-137-105.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Jul 17 15:00:37.438 E ns/openshift-machine-config-operator pod/machine-config-daemon-5qhdd node/ip-10-0-137-105.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 17 15:00:37.465 E ns/openshift-cluster-node-tuning-operator pod/tuned-x84zt node/ip-10-0-137-105.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 8] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:57:34.168648   73856 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:57:34.349315   73856 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:57:45.449669   73856 openshift-tuned.go:550] Pod (openshift-monitoring/kube-state-metrics-58767b99bd-9vvtc) labels changed node wide: true\nI0717 14:57:49.163028   73856 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:57:49.164487   73856 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:57:49.276412   73856 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:58:15.446158   73856 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-3729/foo-7bm5c) labels changed node wide: false\nI0717 14:58:15.466005   73856 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-3729/foo-p6m2t) labels changed node wide: true\nI0717 14:58:19.163006   73856 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:58:19.164360   73856 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:58:19.275180   73856 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:58:25.445642   73856 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-6750/service-test-wgn4r) labels changed node wide: true\nI0717 14:58:29.163021   73856 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0717 14:58:29.164539   73856 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:58:29.287263   73856 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0717 14:58:47.449455   73856 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-5d77486f4d-77pn5) labels changed node wide: true\n
Jul 17 15:00:48.630 E ns/openshift-machine-config-operator pod/machine-config-daemon-5qhdd node/ip-10-0-137-105.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 17 15:01:45.884 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-167.us-west-2.compute.internal node/ip-10-0-142-167.us-west-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): eamID=2959, ErrCode=NO_ERROR, debug=""\nI0717 14:59:42.275650       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=2959, ErrCode=NO_ERROR, debug=""\nI0717 14:59:42.275808       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=2959, ErrCode=NO_ERROR, debug=""\nI0717 14:59:42.275927       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=2959, ErrCode=NO_ERROR, debug=""\nI0717 14:59:42.276049       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=2959, ErrCode=NO_ERROR, debug=""\nI0717 14:59:42.276158       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=2959, ErrCode=NO_ERROR, debug=""\nI0717 14:59:42.276268       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=2959, ErrCode=NO_ERROR, debug=""\nE0717 14:59:42.298260       1 reflector.go:280] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io)\nI0717 14:59:42.348267       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-142-167.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0717 14:59:42.348557       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nE0717 14:59:42.356906       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}\n
Jul 17 15:01:45.884 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-167.us-west-2.compute.internal node/ip-10-0-142-167.us-west-2.compute.internal container=kube-apiserver-insecure-readyz-6 container exited with code 2 (Error): I0717 14:36:17.801957       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Jul 17 15:01:45.884 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-167.us-west-2.compute.internal node/ip-10-0-142-167.us-west-2.compute.internal container=kube-apiserver-cert-syncer-6 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0717 14:58:01.700799       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:58:01.701162       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0717 14:58:01.906775       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:58:01.907591       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
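The kube-apiserver-6 log above records a graceful drain: on "TerminationStart" the process reports itself unready but keeps serving, and clients proxied through it see the HTTP/2 GOAWAY frames logged as ReverseProxy read errors while their connections wind down. A minimal Go sketch of that drain pattern follows; the port (borrowed from the readyz entry above), the 30-second drain window, and the handler are illustrative assumptions, not code from this component.

    package main

    import (
        "context"
        "log"
        "net/http"
        "os"
        "os/signal"
        "sync/atomic"
        "syscall"
        "time"
    )

    func main() {
        var terminating atomic.Bool

        mux := http.NewServeMux()
        // Readiness endpoint: fails as soon as termination begins so load balancers
        // stop sending new traffic, while existing connections keep being served.
        mux.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
            if terminating.Load() {
                http.Error(w, "shutting down", http.StatusServiceUnavailable)
                return
            }
            w.Write([]byte("ok"))
        })

        srv := &http.Server{Addr: ":6080", Handler: mux} // port taken from the readyz entry above

        go func() {
            sig := make(chan os.Signal, 1)
            signal.Notify(sig, syscall.SIGTERM)
            <-sig
            log.Print("received signal to terminate, becoming unready, but keeping serving")
            terminating.Store(true)
            time.Sleep(30 * time.Second) // assumed drain window

            // Stop accepting new connections and wait briefly for in-flight requests.
            ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
            defer cancel()
            if err := srv.Shutdown(ctx); err != nil {
                log.Printf("shutdown: %v", err)
            }
        }()

        if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
            log.Fatal(err)
        }
    }

Because an HTTP/2 server sends GOAWAY when it stops accepting new streams, a backend restarting during the upgrade shows up exactly as the repeated "server sent GOAWAY and closed the connection" lines above.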
Jul 17 15:01:45.970 E ns/openshift-monitoring pod/node-exporter-rp952 node/ip-10-0-142-167.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-17T14:38:23Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-17T14:38:23Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 17 15:01:45.986 E ns/openshift-controller-manager pod/controller-manager-lq7rd node/ip-10-0-142-167.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 17 15:01:45.996 E ns/openshift-sdn pod/sdn-controller-b5mwt node/ip-10-0-142-167.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): .ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"770b23b3-596c-4e1f-bc61-ded59867e199", ResourceVersion:"34553", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730591892, loc:(*time.Location)(0x2b7dcc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-142-167\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-07-17T14:53:29Z\",\"renewTime\":\"2020-07-17T14:53:29Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-142-167 became leader'\nI0717 14:53:29.040607       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0717 14:53:29.047511       1 master.go:51] Initializing SDN master\nI0717 14:53:29.060412       1 network_controller.go:60] Started OpenShift Network Controller\nE0717 14:56:07.389710       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: request timed out\nW0717 14:56:08.289707       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 27464 (37214)\nW0717 14:56:08.289929       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 27466 (37214)\n
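The sdn-controller entry above dumps the openshift-network-controller ConfigMap, whose control-plane.alpha.kubernetes.io/leader annotation carries the leader-election state as JSON. The standalone Go sketch below decodes exactly the fields visible in that annotation; the struct is a hand-rolled stand-in for illustration, not the client-go type.

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // leaderRecord mirrors the annotation payload shown in the log entry above.
    type leaderRecord struct {
        HolderIdentity       string    `json:"holderIdentity"`
        LeaseDurationSeconds int       `json:"leaseDurationSeconds"`
        AcquireTime          time.Time `json:"acquireTime"`
        RenewTime            time.Time `json:"renewTime"`
        LeaderTransitions    int       `json:"leaderTransitions"`
    }

    func main() {
        // Annotation value copied from the sdn-controller log entry above.
        raw := `{"holderIdentity":"ip-10-0-142-167","leaseDurationSeconds":60,"acquireTime":"2020-07-17T14:53:29Z","renewTime":"2020-07-17T14:53:29Z","leaderTransitions":1}`

        var rec leaderRecord
        if err := json.Unmarshal([]byte(raw), &rec); err != nil {
            panic(err)
        }
        fmt.Printf("%s holds the lock; lease is %s from the last renew at %s\n",
            rec.HolderIdentity,
            time.Duration(rec.LeaseDurationSeconds)*time.Second,
            rec.RenewTime.Format(time.RFC3339))
    }

The "no kind is registered for the type v1.ConfigMap" complaint in the same entry appears to affect only event reporting (the recorder cannot serialize the lock object into an event reference); the election itself proceeds, as the subsequent "successfully acquired lease" line shows.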
Jul 17 15:01:46.006 E ns/openshift-multus pod/multus-admission-controller-khjcm node/ip-10-0-142-167.us-west-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Jul 17 15:01:46.017 E ns/openshift-sdn pod/ovs-nnk8t node/ip-10-0-142-167.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error):  s (4 deletes)\n2020-07-17T14:59:08.442Z|00263|bridge|INFO|bridge br0: deleted interface veth3ac8b2f6 on port 5\n2020-07-17T14:59:08.686Z|00264|connmgr|INFO|br0<->unix#1109: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:59:08.712Z|00265|connmgr|INFO|br0<->unix#1112: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:59:08.748Z|00266|bridge|INFO|bridge br0: deleted interface vethf59fa80f on port 8\n2020-07-17T14:59:09.182Z|00267|connmgr|INFO|br0<->unix#1115: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:59:09.211Z|00268|connmgr|INFO|br0<->unix#1118: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:59:09.236Z|00269|bridge|INFO|bridge br0: deleted interface veth93439c7e on port 33\n2020-07-17T14:59:09.380Z|00270|connmgr|INFO|br0<->unix#1121: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:59:09.415Z|00271|connmgr|INFO|br0<->unix#1124: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:59:09.440Z|00272|bridge|INFO|bridge br0: deleted interface veth5ce519d5 on port 19\n2020-07-17T14:59:10.063Z|00273|connmgr|INFO|br0<->unix#1130: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:59:10.092Z|00274|connmgr|INFO|br0<->unix#1133: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:59:10.122Z|00275|bridge|INFO|bridge br0: deleted interface veth6116227f on port 20\n2020-07-17T14:59:10.434Z|00276|connmgr|INFO|br0<->unix#1136: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:59:10.465Z|00277|connmgr|INFO|br0<->unix#1139: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:59:10.484Z|00278|bridge|INFO|bridge br0: deleted interface veth877816fb on port 23\n2020-07-17T14:59:34.074Z|00279|connmgr|INFO|br0<->unix#1157: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-17T14:59:34.099Z|00280|connmgr|INFO|br0<->unix#1160: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-17T14:59:34.119Z|00281|bridge|INFO|bridge br0: deleted interface veth05202e5a on port 22\n2020-07-17 14:59:42 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 17 15:01:46.057 E ns/openshift-multus pod/multus-6kh82 node/ip-10-0-142-167.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Jul 17 15:01:46.080 E ns/openshift-machine-config-operator pod/machine-config-daemon-tf9db node/ip-10-0-142-167.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 17 15:01:46.103 E ns/openshift-cluster-node-tuning-operator pod/tuned-qzbmc node/ip-10-0-142-167.us-west-2.compute.internal container=tuned container exited with code 143 (Error): o:390] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0717 14:59:33.447816  117497 openshift-tuned.go:441] Getting recommended profile...\nI0717 14:59:33.548899  117497 openshift-tuned.go:635] Active profile () != recommended profile (openshift-control-plane)\nI0717 14:59:33.548940  117497 openshift-tuned.go:263] Starting tuned...\n2020-07-17 14:59:33,643 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-07-17 14:59:33,649 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-07-17 14:59:33,649 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-07-17 14:59:33,650 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-07-17 14:59:33,651 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-07-17 14:59:33,688 INFO     tuned.daemon.controller: starting controller\n2020-07-17 14:59:33,688 INFO     tuned.daemon.daemon: starting tuning\n2020-07-17 14:59:33,694 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-07-17 14:59:33,694 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-07-17 14:59:33,697 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-07-17 14:59:33,698 INFO     tuned.plugins.base: instance disk: assigning devices dm-0\n2020-07-17 14:59:33,699 INFO     tuned.plugins.base: instance net: assigning devices ens5\n2020-07-17 14:59:33,756 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-07-17 14:59:33,766 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0717 14:59:41.005490  117497 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-89c69dd88-rcgxn) labels changed node wide: true\n2020-07-17 14:59:42,261 INFO     tuned.daemon.controller: terminating controller\n2020-07-17 14:59:42,261 INFO     tuned.daemon.daemon: stopping tuning\n
Jul 17 15:01:46.115 E ns/openshift-machine-config-operator pod/machine-config-server-vpj6v node/ip-10-0-142-167.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0717 14:51:52.282157       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-6-ga3a98da0-dirty (a3a98da0434ff1a3d5d6ad27df2237a91ebadf53)\nI0717 14:51:52.283087       1 api.go:56] Launching server on :22624\nI0717 14:51:52.283142       1 api.go:56] Launching server on :22623\n
Jul 17 15:01:51.812 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-167.us-west-2.compute.internal node/ip-10-0-142-167.us-west-2.compute.internal container=cluster-policy-controller-6 container exited with code 1 (Error): ntroller controller\nE0717 14:57:17.153038       1 namespace_scc_allocation_controller.go:240] the server is currently unable to handle the request (get rangeallocations.security.openshift.io scc-uid)\nE0717 14:57:17.153050       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)\nE0717 14:57:17.153146       1 reflector.go:126] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)\nE0717 14:57:17.153055       1 reflector.go:126] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nI0717 14:57:18.183609       1 controller_utils.go:1034] Caches are synced for cluster resource quota controller\nE0717 14:57:20.224183       1 reflector.go:126] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)\nE0717 14:57:23.296657       1 reflector.go:126] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)\nE0717 14:59:42.296591       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0717 14:59:42.301586       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)\n
Jul 17 15:01:51.812 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-167.us-west-2.compute.internal node/ip-10-0-142-167.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:58:29.796500       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:58:29.797077       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:58:39.803890       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:58:39.804190       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:58:49.811576       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:58:49.811909       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:58:59.819018       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:58:59.819300       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:59:09.833244       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:59:09.833553       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:59:19.842569       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:59:19.842982       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:59:29.850239       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:59:29.850952       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0717 14:59:39.857153       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0717 14:59:39.857486       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Jul 17 15:01:51.812 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-167.us-west-2.compute.internal node/ip-10-0-142-167.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 2 (Error): dial tcp [::1]:6443: connect: connection refused]\nE0717 14:37:33.696185       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0717 14:37:39.055629       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0717 14:37:40.706569       1 webhook.go:107] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0717 14:37:40.706598       1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0717 14:37:42.406517       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0717 14:37:48.815777       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0717 14:37:55.288997       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0717 14:56:07.357412       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: etcdserver: request timed out\n
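The "error retrieving resource lock kube-system/kube-controller-manager" lines in the kube-controller-manager-6 log above come from the leader-election loop retrying its periodic GET of the lock while the local apiserver endpoint refuses connections or etcd times out; leadership is lost only if renewal keeps failing past the lease duration. A sketch of that loop with client-go follows; the lock name, the durations, and the use of a Lease-backed lock (the log above shows a ConfigMap-backed lock) are illustrative assumptions, not this controller's configuration.

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        lock, err := resourcelock.New(
            resourcelock.LeasesResourceLock,          // assumed lock type
            "kube-system", "example-controller-lock", // illustrative namespace/name
            client.CoreV1(), client.CoordinationV1(),
            resourcelock.ResourceLockConfig{Identity: "ip-10-0-142-167"},
        )
        if err != nil {
            log.Fatal(err)
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 60 * time.Second, // 60s matches the leaseDurationSeconds visible elsewhere in this report
            RenewDeadline: 35 * time.Second, // assumed
            RetryPeriod:   10 * time.Second, // failed GETs like those above are retried at this cadence
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { log.Print("became leader") },
                OnStoppedLeading: func() { log.Print("lost leadership") },
            },
        })
    }

Each failed retrieval is logged at error level, which is why an apiserver rollover produces a burst of these messages without necessarily causing a leader transition.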
Jul 17 15:01:52.760 E ns/openshift-multus pod/multus-6kh82 node/ip-10-0-142-167.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 17 15:01:52.807 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-142-167.us-west-2.compute.internal node/ip-10-0-142-167.us-west-2.compute.internal container=scheduler container exited with code 2 (Error): d: pod 0d3f6a42-e18c-4d23-a1fa-8d9445f7c9d9 is not added to scheduler cache, so cannot be updated\nE0717 14:54:02.732331       1 eventhandlers.go:316] scheduler cache RemovePod failed: pod 0d3f6a42-e18c-4d23-a1fa-8d9445f7c9d9 is not found in scheduler cache, so cannot be removed from it\nE0717 14:55:09.329716       1 eventhandlers.go:288] scheduler cache UpdatePod failed: pod dda646de-b491-40ca-ada5-839155df52cf is not added to scheduler cache, so cannot be updated\nE0717 14:55:10.290044       1 eventhandlers.go:288] scheduler cache UpdatePod failed: pod dda646de-b491-40ca-ada5-839155df52cf is not added to scheduler cache, so cannot be updated\nE0717 14:55:11.339311       1 eventhandlers.go:288] scheduler cache UpdatePod failed: pod dda646de-b491-40ca-ada5-839155df52cf is not added to scheduler cache, so cannot be updated\nE0717 14:55:14.632766       1 eventhandlers.go:288] scheduler cache UpdatePod failed: pod dda646de-b491-40ca-ada5-839155df52cf is not added to scheduler cache, so cannot be updated\nE0717 14:55:26.889355       1 eventhandlers.go:288] scheduler cache UpdatePod failed: pod dda646de-b491-40ca-ada5-839155df52cf is not added to scheduler cache, so cannot be updated\nE0717 14:55:29.163818       1 eventhandlers.go:288] scheduler cache UpdatePod failed: pod dda646de-b491-40ca-ada5-839155df52cf is not added to scheduler cache, so cannot be updated\nW0717 14:56:00.247407       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolume ended with: too old resource version: 25578 (37168)\nE0717 14:56:07.364157       1 leaderelection.go:330] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: etcdserver: request timed out\nW0717 14:56:08.279841       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 25578 (37214)\nW0717 14:56:08.282007       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CSINode ended with: too old resource version: 25627 (37214)\n
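The "watch of *v1.PersistentVolume ended with: too old resource version: 25578 (37168)" warnings in the scheduler log above mean the apiserver's watch cache no longer holds the resourceVersion the reflector was watching from, so the watch is closed and the client must re-list before watching again. A hedged client-go sketch of that list-then-watch recovery follows; the Pod resource, namespace, and helper name are illustrative, not the scheduler's code.

    package main

    import (
        "context"
        "log"
        "os"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func watchPods(ctx context.Context, client kubernetes.Interface, ns string) error {
        for {
            // Re-list to obtain a resourceVersion the apiserver still has cached.
            list, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
            if err != nil {
                return err
            }
            w, err := client.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{ResourceVersion: list.ResourceVersion})
            if err != nil {
                if apierrors.IsResourceExpired(err) || apierrors.IsGone(err) {
                    continue // "too old resource version": list again and retry
                }
                return err
            }
            for ev := range w.ResultChan() {
                if ev.Type == watch.Error {
                    // The server ended the watch, e.g. with an Expired status during the upgrade.
                    log.Printf("watch ended: %v", apierrors.FromObject(ev.Object))
                    break
                }
                // ... handle Added/Modified/Deleted events here ...
            }
            w.Stop()
            // Fall through to re-list and re-watch, which is what the reflector warnings above record.
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            log.Fatal(err)
        }
        if err := watchPods(context.Background(), kubernetes.NewForConfigOrDie(cfg), "default"); err != nil {
            log.Fatal(err)
        }
    }

The reflector in client-go automates this list-then-watch recovery, which is why those messages are logged as warnings rather than failures.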
Jul 17 15:01:58.012 E ns/openshift-machine-config-operator pod/machine-config-daemon-tf9db node/ip-10-0-142-167.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error):