Result: SUCCESS
Tests: 4 failed / 20 succeeded
Started: 2020-09-19 13:07
Elapsed: 1h17m
Work namespace: ci-op-ciq9hg0f
pod: 0b2fdc2f-fa79-11ea-a1fd-0a580a800db2
revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted 32m51s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 8s of 29m55s (0%):

Sep 19 13:55:26.226 E ns/e2e-k8s-service-lb-available-8809 svc/service-test Service stopped responding to GET requests over new connections
Sep 19 13:55:26.231 I ns/e2e-k8s-service-lb-available-8809 svc/service-test Service started responding to GET requests over new connections
Sep 19 13:55:37.300 E ns/e2e-k8s-service-lb-available-8809 svc/service-test Service stopped responding to GET requests over new connections
Sep 19 13:55:37.304 I ns/e2e-k8s-service-lb-available-8809 svc/service-test Service started responding to GET requests over new connections
Sep 19 13:55:58.226 E ns/e2e-k8s-service-lb-available-8809 svc/service-test Service stopped responding to GET requests over new connections
Sep 19 13:55:58.230 I ns/e2e-k8s-service-lb-available-8809 svc/service-test Service started responding to GET requests over new connections
Sep 19 13:56:17.320 E ns/e2e-k8s-service-lb-available-8809 svc/service-test Service stopped responding to GET requests over new connections
Sep 19 13:56:17.326 I ns/e2e-k8s-service-lb-available-8809 svc/service-test Service started responding to GET requests over new connections
Sep 19 13:56:40.226 E ns/e2e-k8s-service-lb-available-8809 svc/service-test Service stopped responding to GET requests on reused connections
Sep 19 13:56:40.230 I ns/e2e-k8s-service-lb-available-8809 svc/service-test Service started responding to GET requests on reused connections
Sep 19 13:57:01.226 E ns/e2e-k8s-service-lb-available-8809 svc/service-test Service stopped responding to GET requests on reused connections
Sep 19 13:57:01.231 I ns/e2e-k8s-service-lb-available-8809 svc/service-test Service started responding to GET requests on reused connections
Sep 19 13:57:20.226 E ns/e2e-k8s-service-lb-available-8809 svc/service-test Service stopped responding to GET requests on reused connections
Sep 19 13:57:20.231 I ns/e2e-k8s-service-lb-available-8809 svc/service-test Service started responding to GET requests on reused connections
Sep 19 14:11:32.226 E ns/e2e-k8s-service-lb-available-8809 svc/service-test Service stopped responding to GET requests on reused connections
Sep 19 14:11:32.229 I ns/e2e-k8s-service-lb-available-8809 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1600524980.xml
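The events above come from the upgrade disruption monitor, which polls the service's load-balancer endpoint for the duration of the upgrade and records each availability transition. As a rough illustration only (this is not the test's actual code; the endpoint, interval, timeout, and iteration count below are placeholders), the technique amounts to:

// A minimal sketch of the polling technique behind the "stopped/started
// responding to GET requests" events above: poll the endpoint and log only
// availability transitions. The real monitor presumably also runs a variant
// with keep-alives disabled (http.Transport.DisableKeepAlives) to separate
// "new" from "reused" connections.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "http://example.com/" // stand-in for the service load balancer address
	client := &http.Client{Timeout: 5 * time.Second}

	available := true // assume the service starts out reachable
	for i := 0; i < 10; i++ {
		resp, err := client.Get(url)
		ok := err == nil && resp.StatusCode == http.StatusOK
		if resp != nil {
			resp.Body.Close()
		}

		// Logging only transitions is why the output above comes in
		// stopped/started pairs, often milliseconds apart.
		ts := time.Now().Format(time.StampMilli)
		switch {
		case available && !ok:
			fmt.Printf("%s E Service stopped responding to GET requests\n", ts)
		case !available && ok:
			fmt.Printf("%s I Service started responding to GET requests\n", ts)
		}
		available = ok

		time.Sleep(1 * time.Second)
	}
}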



Cluster upgrade Cluster frontend ingress remain available 31m51s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 2m5s of 31m51s (7%):

Sep 19 13:53:43.994 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 19 13:53:44.032 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 19 13:53:45.994 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 19 13:53:46.023 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 19 13:55:24.993 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 19 13:55:24.994 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Sep 19 13:55:25.020 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Sep 19 13:55:25.021 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 19 13:55:35.993 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 19 13:55:36.008 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 19 13:55:56.993 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 19 13:55:56.993 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 19 13:55:56.994 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Sep 19 13:55:57.993 - 8s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Sep 19 13:55:57.993 - 8s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 19 13:55:57.993 - 11s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 19 13:56:01.993 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 19 13:56:02.031 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 19 13:56:07.011 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Sep 19 13:56:07.027 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 19 13:56:08.582 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 19 13:56:08.993 - 9s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Sep 19 13:56:09.008 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 19 13:56:10.009 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 19 13:56:10.015 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 19 13:56:10.993 E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 19 13:56:10.993 - 5s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 19 13:56:11.009 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 19 13:56:12.011 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 19 13:56:12.993 - 999ms E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 19 13:56:14.008 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 19 13:56:16.009 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 19 13:56:19.026 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 19 13:56:19.994 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Sep 19 13:56:20.007 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Sep 19 14:04:12.994 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 19 14:04:13.054 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 19 14:07:10.993 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 19 14:07:10.993 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 19 14:07:11.022 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 19 14:07:11.993 - 16s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 19 14:07:13.993 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 19 14:07:14.993 - 9s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Sep 19 14:07:22.993 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 19 14:07:23.019 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 19 14:07:24.021 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 19 14:07:28.044 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 19 14:10:15.993 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 19 14:10:16.993 - 16s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 19 14:10:16.994 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 19 14:10:17.993 - 9s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 19 14:10:24.993 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 19 14:10:25.993 - 6s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Sep 19 14:10:27.042 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 19 14:10:32.052 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 19 14:10:33.025 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 19 14:10:46.994 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 19 14:10:47.009 I ns/openshift-console route/console Route started responding to GET requests over new connections
				from junit_upgrade_1600524980.xml



Cluster upgrade Kubernetes and OpenShift APIs remain available 31m51s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 41s of 31m51s (2%):

Sep 19 13:55:30.912 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 19 13:55:30.917 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:05:24.912 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 19 14:05:25.912 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:05:39.917 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:05:56.915 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 19 14:05:56.920 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:08:41.912 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Sep 19 14:08:41.918 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:08:57.912 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 19 14:08:58.912 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:08:58.917 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:09:07.611 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:09:07.622 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:09:10.676 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:09:10.681 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:09:13.748 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:09:13.756 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:09:16.820 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:09:16.912 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:09:19.897 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:09:22.965 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:09:22.970 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:09:26.036 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:09:26.042 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:09:29.108 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:09:29.114 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:09:32.180 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:09:32.191 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:09:35.252 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:09:35.261 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:09:38.325 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:09:38.912 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:09:41.413 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:09:44.468 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:09:44.912 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:09:47.545 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:09:50.613 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:09:50.912 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:09:50.917 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:09:56.756 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:09:56.761 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:12:09.912 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 19 14:12:09.917 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:12:25.912 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 19 14:12:26.912 - 6s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:12:34.522 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:12:37.588 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:12:37.912 - 5s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:12:43.744 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:12:49.876 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:12:49.912 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:12:52.954 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:13:02.164 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:13:02.169 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:13:05.236 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:13:05.241 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:13:08.309 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:13:08.320 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:13:14.452 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:13:14.463 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:13:20.596 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:13:20.602 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:13:23.668 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:13:23.912 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:13:26.746 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:13:33.012 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:13:33.912 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:13:36.089 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:13:39.156 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:13:39.162 I openshift-apiserver OpenShift API started responding to GET requests
Sep 19 14:13:42.228 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 19 14:13:42.912 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:13:42.925 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1600524980.xml
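The "(Client.Timeout exceeded while awaiting headers)" wording above is Go's net/http client annotating requests that its overall timeout cancelled before response headers arrived; the monitor's GETs carry a 15s budget, visible as ?timeout=15s in the URLs. A minimal sketch of such a request (placeholder host, not the test's code):

// Shows where the two timeout error variants in the events above come from.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Mirror the 15s budget visible as "?timeout=15s" in the URLs above.
	client := &http.Client{Timeout: 15 * time.Second}

	// Placeholder host; the monitor hits the cluster's own API endpoint.
	url := "https://api.example.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s"

	resp, err := client.Get(url)
	if err != nil {
		// If headers never arrive before the timeout, the error reads like:
		//   ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)
		// If the timeout fires while still waiting for a connection:
		//   ... net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("GET succeeded:", resp.Status)
}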



openshift-tests Monitor cluster while tests execute 32m55s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
180 error level events were detected during this test run:

Sep 19 13:47:19.377 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-9f68594fb-gclp9 node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=kube-controller-manager-operator container exited with code 255 (Error): ion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal\" not ready since 2020-09-19 13:40:25 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)"\nI0919 13:40:32.520170       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"1a2a56b4-6114-4da0-a929-f1329f792332", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal\" not ready since 2020-09-19 13:40:25 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)" to "NodeControllerDegraded: All master nodes are ready"\nW0919 13:42:45.686683       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19469 (19530)\nW0919 13:45:32.622011       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20389 (20886)\nW0919 13:45:44.815368       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20886 (20978)\nI0919 13:47:18.522592       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 13:47:18.522653       1 leaderelection.go:66] leaderelection lost\nI0919 13:47:18.524383       1 resourcesync_controller.go:227] Shutting down ResourceSyncController\n
Sep 19 13:47:23.395 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7455ffdd58-blpcz node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=kube-scheduler-operator-container container exited with code 255 (Error): or", Name:"openshift-kube-scheduler-operator", UID:"5589fbf7-bad2-4b61-ac60-ffa6b8aae92b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal\" not ready since 2020-09-19 13:40:25 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)" to "NodeControllerDegraded: All master nodes are ready"\nI0919 13:40:32.552938       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"5589fbf7-bad2-4b61-ac60-ffa6b8aae92b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal\" not ready since 2020-09-19 13:40:25 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)" to "NodeControllerDegraded: All master nodes are ready"\nW0919 13:42:45.687141       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19469 (19530)\nW0919 13:45:32.624883       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20389 (20886)\nW0919 13:45:44.818613       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20886 (20978)\nI0919 13:47:22.486940       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 13:47:22.486993       1 leaderelection.go:66] leaderelection lost\n
Sep 19 13:48:01.940 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=kube-apiserver-6 container exited with code 1 (Error):  13:48:01.301905       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0919 13:48:01.301913       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0919 13:48:01.301920       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0919 13:48:01.301928       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0919 13:48:01.301935       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0919 13:48:01.301943       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0919 13:48:01.301951       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0919 13:48:01.301958       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0919 13:48:01.301966       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0919 13:48:01.301974       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0919 13:48:01.301986       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0919 13:48:01.302004       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0919 13:48:01.302014       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0919 13:48:01.302022       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0919 13:48:01.302065       1 server.go:692] external host was not specified, using 10.0.0.4\nI0919 13:48:01.302272       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0919 13:48:01.302554       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 19 13:48:23.041 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=kube-apiserver-6 container exited with code 1 (Error):  13:48:22.290592       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0919 13:48:22.290596       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0919 13:48:22.290601       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0919 13:48:22.290605       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0919 13:48:22.290609       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0919 13:48:22.290614       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0919 13:48:22.290618       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0919 13:48:22.290622       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0919 13:48:22.290626       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0919 13:48:22.290630       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0919 13:48:22.290637       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0919 13:48:22.290643       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0919 13:48:22.290654       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0919 13:48:22.290661       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0919 13:48:22.290688       1 server.go:692] external host was not specified, using 10.0.0.4\nI0919 13:48:22.290841       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0919 13:48:22.291105       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 19 13:48:43.653 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-7bfc66f499-g2vks node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=openshift-apiserver-operator container exited with code 255 (Error): ith: too old resource version: 13363 (16078)\nW0919 13:37:30.846047       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 13233 (16065)\nW0919 13:37:30.851499       1 reflector.go:299] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.OpenShiftAPIServer ended with: too old resource version: 11940 (16519)\nW0919 13:37:30.853213       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 13252 (16065)\nW0919 13:37:30.853391       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 13273 (16065)\nW0919 13:37:30.854592       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 13267 (16065)\nW0919 13:37:31.410575       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 16066 (16632)\nW0919 13:37:31.605042       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 16064 (16632)\nW0919 13:42:45.686415       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19469 (19530)\nW0919 13:45:32.624577       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20389 (20886)\nW0919 13:45:44.819057       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20886 (20978)\nI0919 13:48:43.129617       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 13:48:43.129710       1 leaderelection.go:66] leaderelection lost\n
Sep 19 13:48:52.188 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=kube-apiserver-6 container exited with code 1 (Error):  13:48:51.312206       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0919 13:48:51.312214       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0919 13:48:51.312222       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0919 13:48:51.312230       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0919 13:48:51.312237       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0919 13:48:51.312244       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0919 13:48:51.312252       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0919 13:48:51.312258       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0919 13:48:51.312265       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0919 13:48:51.312273       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0919 13:48:51.312284       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0919 13:48:51.312294       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0919 13:48:51.312305       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0919 13:48:51.312314       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0919 13:48:51.312354       1 server.go:692] external host was not specified, using 10.0.0.4\nI0919 13:48:51.312503       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0919 13:48:51.312925       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 19 13:48:54.114 E ns/openshift-machine-api pod/machine-api-operator-5bfb747c76-d2svg node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=machine-api-operator container exited with code 2 (Error): 
Sep 19 13:50:22.116 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=kube-apiserver-6 container exited with code 1 (Error):  13:50:21.533110       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0919 13:50:21.533119       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0919 13:50:21.533126       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0919 13:50:21.533134       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0919 13:50:21.533141       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0919 13:50:21.533149       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0919 13:50:21.533156       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0919 13:50:21.533163       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0919 13:50:21.533170       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0919 13:50:21.533178       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0919 13:50:21.533191       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0919 13:50:21.533200       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0919 13:50:21.533210       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0919 13:50:21.533221       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0919 13:50:21.533263       1 server.go:692] external host was not specified, using 10.0.0.5\nI0919 13:50:21.533451       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0919 13:50:21.533743       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 19 13:50:39.192 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=kube-apiserver-6 container exited with code 1 (Error):  13:50:38.424516       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0919 13:50:38.424524       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0919 13:50:38.424531       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0919 13:50:38.424538       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0919 13:50:38.424545       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0919 13:50:38.424552       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0919 13:50:38.424558       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0919 13:50:38.424565       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0919 13:50:38.424573       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0919 13:50:38.424580       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0919 13:50:38.424592       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0919 13:50:38.424603       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0919 13:50:38.424611       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0919 13:50:38.424619       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0919 13:50:38.424658       1 server.go:692] external host was not specified, using 10.0.0.5\nI0919 13:50:38.424826       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0919 13:50:38.425068       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 19 13:51:12.344 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=kube-apiserver-6 container exited with code 1 (Error):  13:51:11.631411       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0919 13:51:11.631418       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0919 13:51:11.631425       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0919 13:51:11.631481       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0919 13:51:11.631488       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0919 13:51:11.631495       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0919 13:51:11.631502       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0919 13:51:11.631508       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0919 13:51:11.631514       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0919 13:51:11.631521       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0919 13:51:11.631531       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0919 13:51:11.631540       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0919 13:51:11.631548       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0919 13:51:11.631556       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0919 13:51:11.631593       1 server.go:692] external host was not specified, using 10.0.0.5\nI0919 13:51:11.631755       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0919 13:51:11.632063       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 19 13:51:15.361 E ns/openshift-cluster-machine-approver pod/machine-approver-56b5c69d56-9srvl node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=machine-approver-controller container exited with code 2 (Error): -api authorization for ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal\nI0919 13:33:42.620124       1 main.go:196] CSR csr-7dmgj approved\nI0919 13:33:53.868707       1 main.go:146] CSR csr-w7xtt added\nI0919 13:33:53.903256       1 csr_check.go:418] retrieving serving cert from ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal (10.0.32.2:10250)\nW0919 13:33:53.905355       1 csr_check.go:178] Failed to retrieve current serving cert: remote error: tls: internal error\nI0919 13:33:53.905388       1 csr_check.go:183] Falling back to machine-api authorization for ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal\nI0919 13:33:53.914507       1 main.go:196] CSR csr-w7xtt approved\nI0919 13:33:55.069185       1 main.go:146] CSR csr-6fwpj added\nI0919 13:33:55.095151       1 csr_check.go:418] retrieving serving cert from ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal (10.0.32.4:10250)\nW0919 13:33:55.097213       1 csr_check.go:178] Failed to retrieve current serving cert: remote error: tls: internal error\nI0919 13:33:55.097244       1 csr_check.go:183] Falling back to machine-api authorization for ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal\nI0919 13:33:55.104370       1 main.go:196] CSR csr-6fwpj approved\nI0919 13:35:41.634473       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0919 13:35:41.634997       1 reflector.go:270] github.com/openshift/cluster-machine-approver/main.go:238: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=14125&timeoutSeconds=499&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0919 13:35:42.635610       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n
Sep 19 13:51:23.239 E ns/openshift-insights pod/insights-operator-65b6d8b5bd-v6n94 node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=operator container exited with code 2 (Error): 1       1 httplog.go:90] GET /metrics: (1.685611ms) 200 [Prometheus/2.14.0 10.129.2.13:43822]\nI0919 13:48:25.418036       1 httplog.go:90] GET /metrics: (10.365721ms) 200 [Prometheus/2.14.0 10.131.0.14:45536]\nI0919 13:48:30.459349       1 httplog.go:90] GET /metrics: (1.905013ms) 200 [Prometheus/2.14.0 10.129.2.13:43822]\nI0919 13:48:44.598099       1 configobserver.go:68] Refreshing configuration from cluster pull secret\nI0919 13:48:44.604915       1 configobserver.go:93] Found cloud.openshift.com token\nI0919 13:48:44.605046       1 configobserver.go:110] Refreshing configuration from cluster secret\nI0919 13:48:55.415941       1 httplog.go:90] GET /metrics: (8.399125ms) 200 [Prometheus/2.14.0 10.131.0.14:45536]\nI0919 13:49:00.459552       1 httplog.go:90] GET /metrics: (2.101686ms) 200 [Prometheus/2.14.0 10.129.2.13:43822]\nI0919 13:49:25.426315       1 httplog.go:90] GET /metrics: (18.679023ms) 200 [Prometheus/2.14.0 10.131.0.14:45536]\nI0919 13:49:30.459116       1 httplog.go:90] GET /metrics: (1.43885ms) 200 [Prometheus/2.14.0 10.129.2.13:43822]\nI0919 13:49:44.588828       1 status.go:158] Number of last upload failures 2 lower than threshold 5. Not marking as degraded.\nI0919 13:49:44.588861       1 status.go:304] The operator is marked as disabled\nI0919 13:49:44.588957       1 status.go:423] No status update necessary, objects are identical\nI0919 13:49:55.415343       1 httplog.go:90] GET /metrics: (7.817104ms) 200 [Prometheus/2.14.0 10.131.0.14:45536]\nI0919 13:50:00.460401       1 httplog.go:90] GET /metrics: (2.667627ms) 200 [Prometheus/2.14.0 10.129.2.13:43822]\nI0919 13:50:25.416448       1 httplog.go:90] GET /metrics: (8.91237ms) 200 [Prometheus/2.14.0 10.131.0.14:45536]\nI0919 13:50:30.459505       1 httplog.go:90] GET /metrics: (1.817499ms) 200 [Prometheus/2.14.0 10.129.2.13:43822]\nI0919 13:50:55.414559       1 httplog.go:90] GET /metrics: (6.973061ms) 200 [Prometheus/2.14.0 10.131.0.14:45536]\nI0919 13:51:00.462104       1 httplog.go:90] GET /metrics: (4.352954ms) 200 [Prometheus/2.14.0 10.129.2.13:43822]\n
Sep 19 13:51:24.312 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-b8b85thx7 node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=operator container exited with code 255 (Error): 4.0 10.131.0.14:44382]\nI0919 13:47:52.354733       1 httplog.go:90] GET /metrics: (2.059187ms) 200 [Prometheus/2.14.0 10.129.2.13:59834]\nI0919 13:48:15.556697       1 httplog.go:90] GET /metrics: (7.43335ms) 200 [Prometheus/2.14.0 10.131.0.14:44382]\nI0919 13:48:22.354589       1 httplog.go:90] GET /metrics: (1.790869ms) 200 [Prometheus/2.14.0 10.129.2.13:59834]\nI0919 13:48:45.555746       1 httplog.go:90] GET /metrics: (6.732814ms) 200 [Prometheus/2.14.0 10.131.0.14:44382]\nI0919 13:48:52.354709       1 httplog.go:90] GET /metrics: (2.117828ms) 200 [Prometheus/2.14.0 10.129.2.13:59834]\nI0919 13:49:15.556583       1 httplog.go:90] GET /metrics: (7.268629ms) 200 [Prometheus/2.14.0 10.131.0.14:44382]\nI0919 13:49:22.354292       1 httplog.go:90] GET /metrics: (1.671985ms) 200 [Prometheus/2.14.0 10.129.2.13:59834]\nI0919 13:49:40.371290       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 1 items received\nI0919 13:49:45.556065       1 httplog.go:90] GET /metrics: (6.74972ms) 200 [Prometheus/2.14.0 10.131.0.14:44382]\nI0919 13:49:52.354630       1 httplog.go:90] GET /metrics: (1.956618ms) 200 [Prometheus/2.14.0 10.129.2.13:59834]\nI0919 13:50:15.556912       1 httplog.go:90] GET /metrics: (7.538548ms) 200 [Prometheus/2.14.0 10.131.0.14:44382]\nI0919 13:50:22.354439       1 httplog.go:90] GET /metrics: (1.684102ms) 200 [Prometheus/2.14.0 10.129.2.13:59834]\nI0919 13:50:45.556243       1 httplog.go:90] GET /metrics: (6.970844ms) 200 [Prometheus/2.14.0 10.131.0.14:44382]\nI0919 13:50:52.354826       1 httplog.go:90] GET /metrics: (1.992453ms) 200 [Prometheus/2.14.0 10.129.2.13:59834]\nI0919 13:51:15.557844       1 httplog.go:90] GET /metrics: (8.705622ms) 200 [Prometheus/2.14.0 10.131.0.14:44382]\nI0919 13:51:22.355280       1 httplog.go:90] GET /metrics: (2.543856ms) 200 [Prometheus/2.14.0 10.129.2.13:59834]\nI0919 13:51:23.537604       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 13:51:23.537655       1 leaderelection.go:66] leaderelection lost\n
Sep 19 13:51:26.258 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Could not update deployment "openshift-authentication-operator/authentication-operator" (159 of 508)\n* Could not update deployment "openshift-cloud-credential-operator/cloud-credential-operator" (142 of 508)\n* Could not update deployment "openshift-cluster-samples-operator/cluster-samples-operator" (256 of 508)\n* Could not update deployment "openshift-console/downloads" (326 of 508)\n* Could not update deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (240 of 508)\n* Could not update deployment "openshift-image-registry/cluster-image-registry-operator" (197 of 508)\n* Could not update deployment "openshift-machine-api/cluster-autoscaler-operator" (180 of 508)\n* Could not update deployment "openshift-marketplace/marketplace-operator" (385 of 508)\n* Could not update deployment "openshift-operator-lifecycle-manager/olm-operator" (364 of 508)
Sep 19 13:51:50.333 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2020/09/19 13:40:42 Watching directory: "/etc/alertmanager/config"\n
Sep 19 13:51:50.333 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/19 13:40:43 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 13:40:43 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 13:40:43 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 13:40:43 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/19 13:40:43 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 13:40:43 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 13:40:43 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 13:40:43 http.go:106: HTTPS: listening on [::]:9095\n
Sep 19 13:51:57.253 E ns/openshift-monitoring pod/node-exporter-9qcnw node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 143 (Error): 9-19T13:34:31Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T13:34:31Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 13:51:59.298 E ns/openshift-monitoring pod/thanos-querier-8548bd7ddf-7cj97 node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 2 (Error): 2020/09/19 13:41:39 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/19 13:41:39 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 13:41:39 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 13:41:39 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/19 13:41:39 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 13:41:39 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/19 13:41:39 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 13:41:39 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/19 13:41:39 http.go:106: HTTPS: listening on [::]:9091\n
Sep 19 13:52:02.275 E ns/openshift-monitoring pod/prometheus-adapter-5cd7c9b57d-lkqzc node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error): I0919 13:40:51.824998       1 adapter.go:93] successfully using in-cluster auth\nI0919 13:40:52.328780       1 secure_serving.go:116] Serving securely on [::]:6443\n
Sep 19 13:52:07.216 E ns/openshift-monitoring pod/grafana-6874f7d95c-ftvd4 node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=grafana-proxy container exited with code 2 (Error): 
Sep 19 13:52:26.604 E ns/openshift-ingress pod/router-default-6f765bdbf5-btjbv node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): r is currently unable to handle the request (get routes.route.openshift.io)\nI0919 13:51:16.262992       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 13:51:21.262064       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nE0919 13:51:26.265176       1 limiter.go:140] error reloading router: wait: no child processes\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 13:51:48.766546       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 13:51:53.766951       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 13:51:58.768514       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 13:52:03.767118       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 13:52:08.776583       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 13:52:13.778257       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 13:52:18.813978       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 13:52:23.773456       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Sep 19 13:52:27.416 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T13:52:24.500Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T13:52:24.504Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T13:52:24.504Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T13:52:24.505Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T13:52:24.505Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-09-19T13:52:24.505Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T13:52:24.505Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T13:52:24.505Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T13:52:24.505Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T13:52:24.505Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T13:52:24.505Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T13:52:24.505Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T13:52:24.506Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T13:52:24.506Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T13:52:24.507Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T13:52:24.507Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 13:52:32.203 E ns/openshift-controller-manager pod/controller-manager-8knlq node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=controller-manager container exited with code 137 (Error): 
Sep 19 13:52:33.221 E ns/openshift-monitoring pod/node-exporter-hdfdr node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 143 (Error): 9-19T13:31:40Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T13:31:40Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 13:52:35.631 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/09/19 13:42:12 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 19 13:52:35.631 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=prometheus-proxy container exited with code 2 (Error): 2020/09/19 13:42:13 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/19 13:42:13 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 13:42:13 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 13:42:13 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/19 13:42:13 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 13:42:13 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/19 13:42:13 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 13:42:13 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/19 13:42:13 http.go:106: HTTPS: listening on [::]:9091\n2020/09/19 13:45:37 oauthproxy.go:774: basicauth: 10.129.2.7:47290 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 13:50:07 oauthproxy.go:774: basicauth: 10.129.2.7:49072 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 13:52:03 oauthproxy.go:774: basicauth: 10.129.0.55:43380 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 13:52:03 oauthproxy.go:774: basicauth: 10.128.2.26:50664 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 19 13:52:35.631 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-19T13:42:12.487274943Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-09-19T13:42:12.487437856Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-09-19T13:42:12.490053059Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-19T13:42:17.639086294Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Sep 19 13:52:37.270 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=kube-apiserver-6 container exited with code 1 (Error):  13:52:36.646894       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0919 13:52:36.646902       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0919 13:52:36.646908       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0919 13:52:36.646916       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0919 13:52:36.646923       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0919 13:52:36.646929       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0919 13:52:36.646936       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0919 13:52:36.646942       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0919 13:52:36.646949       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0919 13:52:36.646956       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0919 13:52:36.646966       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0919 13:52:36.646972       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0919 13:52:36.646977       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0919 13:52:36.646983       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0919 13:52:36.647011       1 server.go:692] external host was not specified, using 10.0.0.3\nI0919 13:52:36.647139       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0919 13:52:36.647365       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 19 13:52:42.322 E ns/openshift-monitoring pod/node-exporter-q2dzm node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 143 (Error): 9-19T13:32:01Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T13:32:01Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 13:52:50.536 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T13:52:47.278Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T13:52:47.286Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T13:52:47.286Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T13:52:47.287Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T13:52:47.287Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-09-19T13:52:47.287Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T13:52:47.287Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T13:52:47.287Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T13:52:47.287Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T13:52:47.287Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T13:52:47.287Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T13:52:47.287Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T13:52:47.287Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T13:52:47.287Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T13:52:47.290Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T13:52:47.290Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 13:52:52.660 E ns/openshift-marketplace pod/redhat-operators-5c6bfd488b-977gz node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=redhat-operators container exited with code 2 (Error): 
Sep 19 13:52:56.424 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=kube-apiserver-6 container exited with code 1 (Error):  13:52:55.634257       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0919 13:52:55.634262       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0919 13:52:55.634265       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0919 13:52:55.634269       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0919 13:52:55.634273       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0919 13:52:55.634277       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0919 13:52:55.634282       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0919 13:52:55.634285       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0919 13:52:55.634289       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0919 13:52:55.634293       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0919 13:52:55.634300       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0919 13:52:55.634306       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0919 13:52:55.634311       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0919 13:52:55.634316       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0919 13:52:55.634342       1 server.go:692] external host was not specified, using 10.0.0.3\nI0919 13:52:55.634503       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0919 13:52:55.634736       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 19 13:53:03.698 E ns/openshift-marketplace pod/certified-operators-5549cf49f8-tzstv node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 2 (Error): 
Sep 19 13:53:12.273 E ns/openshift-controller-manager pod/controller-manager-562sn node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=controller-manager container exited with code 137 (Error): 
Sep 19 13:53:27.496 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=kube-apiserver-6 container exited with code 1 (Error):  13:53:26.604013       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0919 13:53:26.604017       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0919 13:53:26.604021       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0919 13:53:26.604026       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0919 13:53:26.604029       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0919 13:53:26.604033       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0919 13:53:26.604038       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0919 13:53:26.604042       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0919 13:53:26.604045       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0919 13:53:26.604052       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0919 13:53:26.604058       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0919 13:53:26.604063       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0919 13:53:26.604068       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0919 13:53:26.604072       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0919 13:53:26.604097       1 server.go:692] external host was not specified, using 10.0.0.3\nI0919 13:53:26.604219       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0919 13:53:26.604431       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 19 13:53:35.608 E ns/openshift-service-ca pod/service-serving-cert-signer-7c7fffdb9-6xd97 node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Sep 19 13:53:35.633 E ns/openshift-service-ca pod/apiservice-cabundle-injector-5748569f99-cw424 node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Sep 19 13:53:38.633 E ns/openshift-console pod/console-6dd8bbb88b-nbv47 node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=console container exited with code 2 (Error): 2020/09/19 13:35:11 cmd/main: cookies are secure!\n2020/09/19 13:35:11 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/19 13:35:21 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/19 13:35:31 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/19 13:35:41 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/19 13:35:51 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/19 13:36:01 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/19 13:36:11 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/19 13:36:21 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/19 13:36:32 cmd/main: Binding to [::]:8443...\n2020/09/19 13:36:32 cmd/main: using TLS\n
Sep 19 13:54:51.915 E ns/openshift-controller-manager pod/controller-manager-qm4gk node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=controller-manager container exited with code 137 (Error): 
Sep 19 13:55:13.996 E ns/openshift-sdn pod/sdn-controller-qpxhg node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=sdn-controller container exited with code 2 (Error): I0919 13:21:27.597455       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0919 13:51:41.614459       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.0.2:6443: connect: connection refused\n
Sep 19 13:55:21.028 E ns/openshift-sdn pod/sdn-controller-4xcmd node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=sdn-controller container exited with code 2 (Error): true: dial tcp 10.0.0.2:6443: connect: connection refused\nE0919 13:54:09.720381       1 reflector.go:280] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.HostSubnet: Get https://api-int.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/apis/network.openshift.io/v1/hostsubnets?allowWatchBookmarks=true&resourceVersion=17667&timeout=7m15s&timeoutSeconds=435&watch=true: dial tcp 10.0.0.2:6443: connect: connection refused\nE0919 13:54:09.720381       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: Get https://api-int.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=20111&timeout=5m14s&timeoutSeconds=314&watch=true: dial tcp 10.0.0.2:6443: connect: connection refused\nE0919 13:54:09.727650       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: Get https://api-int.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=27973&timeout=5m58s&timeoutSeconds=358&watch=true: dial tcp 10.0.0.2:6443: connect: connection refused\nW0919 13:54:16.383042       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Node ended with: too old resource version: 27973 (28298)\nE0919 13:54:16.383136       1 reflector.go:280] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.NetNamespace: the server is currently unable to handle the request (get netnamespaces.network.openshift.io)\nW0919 13:54:16.383283       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 20111 (28298)\nE0919 13:54:16.385223       1 reflector.go:280] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.HostSubnet: the server is currently unable to handle the request (get hostsubnets.network.openshift.io)\n
Sep 19 13:55:25.059 E ns/openshift-sdn pod/sdn-h96jf node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error): obin.go:218] Delete endpoint 10.130.0.58:8443 for service "openshift-controller-manager/controller-manager:https"\nI0919 13:54:20.952606    2627 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:54:20.952635    2627 proxier.go:350] userspace syncProxyRules took 39.651008ms\nI0919 13:54:21.130132    2627 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:54:21.130224    2627 proxier.go:350] userspace syncProxyRules took 45.961024ms\nI0919 13:54:51.155940    2627 pod.go:540] CNI_DEL openshift-controller-manager/controller-manager-qm4gk\nI0919 13:54:51.304445    2627 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:54:51.304470    2627 proxier.go:350] userspace syncProxyRules took 38.349299ms\nI0919 13:54:53.621926    2627 pod.go:504] CNI_ADD openshift-controller-manager/controller-manager-zp4c7 got IP 10.130.0.62, ofport 63\nI0919 13:54:55.940190    2627 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-controller-manager/controller-manager:https to [10.128.0.109:8443 10.129.0.64:8443 10.130.0.62:8443]\nI0919 13:54:55.940241    2627 roundrobin.go:218] Delete endpoint 10.130.0.62:8443 for service "openshift-controller-manager/controller-manager:https"\nI0919 13:54:56.101895    2627 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:54:56.101923    2627 proxier.go:350] userspace syncProxyRules took 47.138505ms\nI0919 13:55:11.518900    2627 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.7:6443 10.129.0.4:6443]\nI0919 13:55:11.519032    2627 roundrobin.go:218] Delete endpoint 10.130.0.15:6443 for service "openshift-multus/multus-admission-controller:"\nI0919 13:55:11.668775    2627 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:55:11.668815    2627 proxier.go:350] userspace syncProxyRules took 34.956088ms\nF0919 13:55:24.153509    2627 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 19 13:55:28.913 E ns/openshift-sdn pod/sdn-controller-fjchg node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=sdn-controller container exited with code 2 (Error): 9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.0.2:6443: connect: connection refused\nE0919 13:49:29.547632       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.0.2:6443: connect: connection refused\nE0919 13:49:49.398568       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: configmaps "openshift-network-controller" is forbidden: User "system:serviceaccount:openshift-sdn:sdn-controller" cannot get resource "configmaps" in API group "" in the namespace "openshift-sdn": RBAC: [clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "openshift-sdn-controller" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]\n
Sep 19 13:55:42.014 E ns/openshift-multus pod/multus-hjt4r node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 13:55:42.623 E ns/openshift-multus pod/multus-admission-controller-t9sk6 node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=multus-admission-controller container exited with code 137 (Error): 
Sep 19 13:55:52.180 E ns/openshift-sdn pod/sdn-p72gs node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error): 5.941010    2036 roundrobin.go:218] Delete endpoint 10.130.0.62:8443 for service "openshift-controller-manager/controller-manager:https"\nI0919 13:54:56.128631    2036 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:54:56.128681    2036 proxier.go:350] userspace syncProxyRules took 41.180657ms\nI0919 13:55:11.516769    2036 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.7:6443 10.129.0.4:6443]\nI0919 13:55:11.516812    2036 roundrobin.go:218] Delete endpoint 10.130.0.15:6443 for service "openshift-multus/multus-admission-controller:"\nI0919 13:55:11.686254    2036 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:55:11.686284    2036 proxier.go:350] userspace syncProxyRules took 41.817368ms\nI0919 13:55:37.239717    2036 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-authentication/oauth-openshift:https to [10.129.0.63:6443]\nI0919 13:55:37.239772    2036 roundrobin.go:218] Delete endpoint 10.130.0.59:6443 for service "openshift-authentication/oauth-openshift:https"\nI0919 13:55:37.388018    2036 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:55:37.388057    2036 proxier.go:350] userspace syncProxyRules took 37.290663ms\nI0919 13:55:46.244984    2036 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-authentication/oauth-openshift:https to [10.129.0.63:6443 10.130.0.59:6443]\nI0919 13:55:46.245034    2036 roundrobin.go:218] Delete endpoint 10.130.0.59:6443 for service "openshift-authentication/oauth-openshift:https"\nI0919 13:55:46.390360    2036 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:55:46.390392    2036 proxier.go:350] userspace syncProxyRules took 37.306387ms\nI0919 13:55:50.633433    2036 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0919 13:55:51.162819    2036 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 19 13:55:53.677 E ns/openshift-service-ca pod/service-serving-cert-signer-7bd46b9884-znphn node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Sep 19 13:56:13.163 E ns/openshift-sdn pod/sdn-bw95l node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error):  13:55:55.334584   87065 roundrobin.go:218] Delete endpoint 10.128.2.19:80 for service "e2e-k8s-service-lb-available-8809/service-test:"\nI0919 13:55:55.497942   87065 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:55:55.497973   87065 proxier.go:350] userspace syncProxyRules took 44.248977ms\nI0919 13:56:02.721823   87065 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-authentication/oauth-openshift:https to [10.129.0.63:6443]\nI0919 13:56:02.721925   87065 roundrobin.go:218] Delete endpoint 10.130.0.59:6443 for service "openshift-authentication/oauth-openshift:https"\nI0919 13:56:02.894109   87065 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:56:02.894149   87065 proxier.go:350] userspace syncProxyRules took 47.637716ms\nI0919 13:56:06.241386   87065 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-authentication/oauth-openshift:https to [10.129.0.63:6443 10.130.0.59:6443]\nI0919 13:56:06.241429   87065 roundrobin.go:218] Delete endpoint 10.130.0.59:6443 for service "openshift-authentication/oauth-openshift:https"\nI0919 13:56:06.404666   87065 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:56:06.404693   87065 proxier.go:350] userspace syncProxyRules took 42.173653ms\nI0919 13:56:09.346424   87065 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-8809/service-test: to [10.128.2.19:80 10.129.2.18:80]\nI0919 13:56:09.346461   87065 roundrobin.go:218] Delete endpoint 10.128.2.19:80 for service "e2e-k8s-service-lb-available-8809/service-test:"\nI0919 13:56:09.526737   87065 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:56:09.526772   87065 proxier.go:350] userspace syncProxyRules took 43.493171ms\nI0919 13:56:12.327490   87065 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0919 13:56:12.853843   87065 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 19 13:56:27.814 E ns/openshift-multus pod/multus-54hpv node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 13:56:34.418 E ns/openshift-sdn pod/sdn-c7k84 node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error): ProxyRules took 123.581072ms\nI0919 13:56:28.067019  101198 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:56:28.067044  101198 proxier.go:350] userspace syncProxyRules took 145.475951ms\nI0919 13:56:28.134690  101198 proxier.go:1552] Opened local port "nodePort for openshift-ingress/router-default:http" (:31130/tcp)\nI0919 13:56:28.135454  101198 proxier.go:1552] Opened local port "nodePort for e2e-k8s-service-lb-available-8809/service-test:" (:31900/tcp)\nI0919 13:56:28.135908  101198 proxier.go:1552] Opened local port "nodePort for openshift-ingress/router-default:https" (:31020/tcp)\nI0919 13:56:28.173859  101198 healthcheck.go:151] Opening healthcheck "openshift-ingress/router-default" on port 32252\nI0919 13:56:28.195554  101198 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0919 13:56:28.195667  101198 cmd.go:173] openshift-sdn network plugin registering startup\nI0919 13:56:28.195800  101198 cmd.go:177] openshift-sdn network plugin ready\nI0919 13:56:29.089249  101198 pod.go:540] CNI_DEL openshift-multus/multus-admission-controller-9xcct\nI0919 13:56:31.074683  101198 ovs.go:180] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0919 13:56:31.581589  101198 ovs.go:180] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0919 13:56:32.212443  101198 ovs.go:180] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0919 13:56:33.001308  101198 ovs.go:180] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0919 13:56:33.517138  101198 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0919 13:56:34.044756  101198 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 19 13:56:46.325 E ns/openshift-service-ca pod/apiservice-cabundle-injector-c6d7bf44d-75z2r node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Sep 19 13:56:54.536 E ns/openshift-sdn pod/sdn-f7tbl node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error):    68411 service.go:357] Adding new service port "openshift-ingress/router-default:http" at 172.30.126.20:80/TCP\nI0919 13:56:46.326253   68411 service.go:357] Adding new service port "openshift-ingress/router-default:https" at 172.30.126.20:443/TCP\nI0919 13:56:46.326268   68411 service.go:357] Adding new service port "openshift-cloud-credential-operator/cco-metrics:cco-metrics" at 172.30.235.236:2112/TCP\nI0919 13:56:46.326284   68411 service.go:357] Adding new service port "openshift-machine-api/machine-api-operator:https" at 172.30.245.154:8443/TCP\nI0919 13:56:46.326298   68411 service.go:357] Adding new service port "openshift-authentication-operator/metrics:https" at 172.30.209.190:443/TCP\nI0919 13:56:46.326596   68411 proxier.go:731] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0919 13:56:46.399528   68411 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:56:46.399552   68411 proxier.go:350] userspace syncProxyRules took 71.997647ms\nI0919 13:56:46.449080   68411 proxier.go:1552] Opened local port "nodePort for openshift-ingress/router-default:https" (:31020/tcp)\nI0919 13:56:46.449237   68411 proxier.go:1552] Opened local port "nodePort for openshift-ingress/router-default:http" (:31130/tcp)\nI0919 13:56:46.449445   68411 proxier.go:1552] Opened local port "nodePort for e2e-k8s-service-lb-available-8809/service-test:" (:31900/tcp)\nI0919 13:56:46.480922   68411 healthcheck.go:151] Opening healthcheck "openshift-ingress/router-default" on port 32252\nI0919 13:56:46.498291   68411 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0919 13:56:46.498316   68411 cmd.go:173] openshift-sdn network plugin registering startup\nI0919 13:56:46.498415   68411 cmd.go:177] openshift-sdn network plugin ready\nI0919 13:56:53.521238   68411 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0919 13:56:54.045625   68411 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 19 13:57:07.542 E ns/openshift-multus pod/multus-4mr2b node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 13:57:14.234 E ns/openshift-sdn pod/sdn-7k6cf node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error): g 0 service events\nI0919 13:56:06.376037   79008 proxier.go:350] userspace syncProxyRules took 35.47726ms\nI0919 13:56:09.346299   79008 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-8809/service-test: to [10.128.2.19:80 10.129.2.18:80]\nI0919 13:56:09.346328   79008 roundrobin.go:218] Delete endpoint 10.128.2.19:80 for service "e2e-k8s-service-lb-available-8809/service-test:"\nI0919 13:56:09.495757   79008 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:56:09.495779   79008 proxier.go:350] userspace syncProxyRules took 34.405666ms\nI0919 13:56:39.637959   79008 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:56:39.637985   79008 proxier.go:350] userspace syncProxyRules took 36.348891ms\nI0919 13:56:54.506223   79008 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.112:6443 10.129.0.4:6443 10.130.0.63:6443]\nI0919 13:56:54.506256   79008 roundrobin.go:218] Delete endpoint 10.128.0.112:6443 for service "openshift-multus/multus-admission-controller:"\nI0919 13:56:54.523689   79008 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.112:6443 10.130.0.63:6443]\nI0919 13:56:54.523728   79008 roundrobin.go:218] Delete endpoint 10.129.0.4:6443 for service "openshift-multus/multus-admission-controller:"\nI0919 13:56:54.648590   79008 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:56:54.648616   79008 proxier.go:350] userspace syncProxyRules took 37.643473ms\nI0919 13:56:54.788216   79008 proxier.go:371] userspace proxy: processing 0 service events\nI0919 13:56:54.788244   79008 proxier.go:350] userspace syncProxyRules took 33.792115ms\nI0919 13:57:13.151393   79008 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0919 13:57:13.676919   79008 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 19 13:57:25.504 E ns/openshift-multus pod/multus-admission-controller-d69tr node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=multus-admission-controller container exited with code 137 (Error): 
Sep 19 13:57:46.778 E ns/openshift-multus pod/multus-7wbtr node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 13:58:25.793 E ns/openshift-multus pod/multus-f8r4r node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 13:59:07.812 E ns/openshift-multus pod/multus-2gcf4 node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 14:01:25.643 E ns/openshift-machine-config-operator pod/machine-config-daemon-2mlvk node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 19 14:01:33.746 E ns/openshift-machine-config-operator pod/machine-config-daemon-8fxk6 node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 19 14:01:42.559 E ns/openshift-machine-config-operator pod/machine-config-daemon-vtv2w node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 19 14:01:49.224 E ns/openshift-machine-config-operator pod/machine-config-daemon-swd76 node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 19 14:01:52.741 E ns/openshift-machine-config-operator pod/machine-config-daemon-bn726 node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 19 14:02:14.667 E ns/openshift-machine-config-operator pod/machine-config-controller-54cc8569cd-rrbrb node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=machine-config-controller container exited with code 2 (Error): licy ended with: too old resource version: 22420 (28147)\nW0919 13:53:45.676372       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 27341 (28147)\nW0919 13:53:45.712155       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 20981 (28145)\nW0919 13:53:45.712392       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Scheduler ended with: too old resource version: 20985 (28138)\nW0919 13:53:45.712399       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfigPool ended with: too old resource version: 22420 (28145)\nW0919 13:53:45.716278       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.KubeletConfig ended with: too old resource version: 22420 (28147)\nW0919 13:53:45.760718       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ContainerRuntimeConfig ended with: too old resource version: 22420 (28147)\nI0919 13:53:46.750724       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool worker\nI0919 13:53:46.814931       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool master\nW0919 13:58:22.299048       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 30161 (30379)\nW0919 13:58:25.189890       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 30379 (30393)\n
Sep 19 14:03:57.666 E ns/openshift-monitoring pod/thanos-querier-745dd77fcc-lzw6j node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 2 (Error): 2020/09/19 13:51:54 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/19 13:51:54 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 13:51:54 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 13:51:54 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/19 13:51:54 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 13:51:54 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/19 13:51:54 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 13:51:54 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/19 13:51:54 http.go:106: HTTPS: listening on [::]:9091\n
Sep 19 14:03:57.682 E ns/openshift-monitoring pod/prometheus-adapter-65687f747c-75rk6 node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error): I0919 13:52:00.550897       1 adapter.go:93] successfully using in-cluster auth\nI0919 13:52:01.497723       1 secure_serving.go:116] Serving securely on [::]:6443\n
Sep 19 14:03:57.808 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2020/09/19 13:52:39 Watching directory: "/etc/alertmanager/config"\n
Sep 19 14:03:57.808 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/19 13:52:40 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 13:52:40 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 13:52:40 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 13:52:40 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/19 13:52:40 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 13:52:40 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 13:52:40 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 13:52:40 http.go:106: HTTPS: listening on [::]:9095\n2020/09/19 13:56:11 reverseproxy.go:447: http: proxy error: context canceled\n
Sep 19 14:03:57.826 E ns/openshift-marketplace pod/certified-operators-f97f567cb-8ppcx node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 2 (Error): 
Sep 19 14:04:01.273 E ns/openshift-service-ca pod/apiservice-cabundle-injector-c6d7bf44d-75z2r node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Sep 19 14:04:02.577 E ns/openshift-machine-config-operator pod/machine-config-controller-877bbddc7-jk7q7 node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=machine-config-controller container exited with code 2 (Error): econfiguration.openshift.io/v1  } {MachineConfig  99-master-ssh  machineconfiguration.openshift.io/v1  }]\nI0919 14:03:49.507227       1 render_controller.go:516] Pool worker: now targeting: rendered-worker-d1de4edb4860689ae37f20da9714ae3f\nI0919 14:03:49.524597       1 render_controller.go:516] Pool master: now targeting: rendered-master-65d48da6042034e8b7195036a60f2739\nI0919 14:03:54.528073       1 node_controller.go:758] Setting node ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal to desired config rendered-master-65d48da6042034e8b7195036a60f2739\nI0919 14:03:54.534274       1 node_controller.go:758] Setting node ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal to desired config rendered-worker-d1de4edb4860689ae37f20da9714ae3f\nI0919 14:03:54.559740       1 node_controller.go:452] Pool master: node ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-65d48da6042034e8b7195036a60f2739\nI0919 14:03:54.559831       1 node_controller.go:452] Pool worker: node ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-d1de4edb4860689ae37f20da9714ae3f\nI0919 14:03:55.573061       1 node_controller.go:452] Pool worker: node ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal changed machineconfiguration.openshift.io/state = Working\nI0919 14:03:55.575587       1 node_controller.go:452] Pool master: node ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal changed machineconfiguration.openshift.io/state = Working\nI0919 14:03:55.591075       1 node_controller.go:433] Pool worker: node ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal is now reporting unready: node ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal is reporting Unschedulable\nI0919 14:03:55.603214       1 node_controller.go:433] Pool master: node ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal is now reporting unready: node ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal is reporting Unschedulable\n
Sep 19 14:04:03.983 E ns/openshift-console-operator pod/console-operator-c6dbcdb98-g65pf node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=console-operator container exited with code 255 (Error): SyncLoopRefreshProgressing InProgress Working toward version 4.3.0-0.ci-2020-09-18-202802\nE0919 13:53:33.224570       1 status.go:73] DeploymentAvailable FailedUpdate 2 replicas ready at version 4.3.0-0.ci-2020-09-18-202802\nE0919 13:53:33.327899       1 status.go:73] SyncLoopRefreshProgressing InProgress Working toward version 4.3.0-0.ci-2020-09-18-202802\nE0919 13:53:33.327936       1 status.go:73] DeploymentAvailable FailedUpdate 2 replicas ready at version 4.3.0-0.ci-2020-09-18-202802\nI0919 13:53:38.200251       1 status_controller.go:175] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-19T13:31:44Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-19T13:53:38Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-19T13:53:38Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-19T13:31:44Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0919 13:53:38.222814       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"5b3ec3a1-6700-45ea-98f6-2828d0fa20ee", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing changed from True to False (""),Available changed from False to True ("")\nW0919 13:58:22.187229       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 30265 (30376)\nW0919 13:58:24.974717       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 30376 (30387)\nI0919 14:04:00.390344       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 14:04:00.390407       1 leaderelection.go:66] leaderelection lost\n
Sep 19 14:04:05.209 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-5b9f77df44-kjz6b node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=cluster-node-tuning-operator container exited with code 255 (Error): nexpected EOF\nI0919 13:53:43.598280       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0919 13:53:43.896506       1 reflector.go:299] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:204: watch of *v1.ServiceAccount ended with: too old resource version: 20098 (25446)\nW0919 13:53:43.952188       1 reflector.go:299] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:204: watch of *v1.ConfigMap ended with: too old resource version: 25908 (27714)\nW0919 13:53:44.459807       1 reflector.go:299] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: watch of *v1.Tuned ended with: too old resource version: 25756 (28138)\nI0919 13:53:45.462652       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0919 13:53:45.462682       1 status.go:25] syncOperatorStatus()\nI0919 13:53:45.475369       1 tuned_controller.go:188] syncServiceAccount()\nI0919 13:53:45.475584       1 tuned_controller.go:215] syncClusterRole()\nI0919 13:53:45.515317       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0919 13:53:45.566751       1 tuned_controller.go:281] syncClusterConfigMap()\nI0919 13:53:45.571659       1 tuned_controller.go:281] syncClusterConfigMap()\nI0919 13:53:45.580446       1 tuned_controller.go:320] syncDaemonSet()\nI0919 14:02:02.593589       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0919 14:02:02.593643       1 status.go:25] syncOperatorStatus()\nI0919 14:02:02.607027       1 tuned_controller.go:188] syncServiceAccount()\nI0919 14:02:02.607288       1 tuned_controller.go:215] syncClusterRole()\nI0919 14:02:02.643038       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0919 14:02:02.684446       1 tuned_controller.go:281] syncClusterConfigMap()\nI0919 14:02:02.690769       1 tuned_controller.go:281] syncClusterConfigMap()\nI0919 14:02:02.696608       1 tuned_controller.go:320] syncDaemonSet()\nF0919 14:04:03.201395       1 main.go:82] <nil>\n
Sep 19 14:04:05.273 E ns/openshift-authentication-operator pod/authentication-operator-86869898df-qxsdh node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=operator container exited with code 255 (Error): ID:"654f90af-cdc2-4fd1-b404-06ba30b87549", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteHealthDegraded: failed to GET route: dial tcp 35.185.121.200:443: i/o timeout" to "",Progressing changed from True to False ("")\nW0919 13:58:22.187630       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 30265 (30376)\nW0919 13:58:24.960184       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 30376 (30387)\nI0919 14:03:59.570229       1 status_controller.go:166] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-19T13:35:05Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-19T14:03:59Z","message":"Progressing: not all deployment replicas are ready","reason":"ProgressingOAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-09-19T13:39:54Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-19T13:31:25Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0919 14:03:59.602357       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"654f90af-cdc2-4fd1-b404-06ba30b87549", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from False to True ("Progressing: not all deployment replicas are ready")\nI0919 14:04:03.374499       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 14:04:03.374550       1 leaderelection.go:66] leaderelection lost\n
Sep 19 14:04:05.318 E ns/openshift-service-ca-operator pod/service-ca-operator-58f9d8499d-nkn56 node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=operator container exited with code 255 (Error): 
Sep 19 14:04:05.422 E ns/openshift-insights pod/insights-operator-7c88fbb696-9fwzt node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=operator container exited with code 2 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-insights_insights-operator-7c88fbb696-9fwzt_0cf8c980-81f7-484b-9f1e-8ba57f5d02c8/operator/0.log": lstat /var/log/pods/openshift-insights_insights-operator-7c88fbb696-9fwzt_0cf8c980-81f7-484b-9f1e-8ba57f5d02c8/operator/0.log: no such file or directory
Sep 19 14:04:06.334 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T14:04:03.040Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T14:04:03.041Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T14:04:03.046Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T14:04:03.047Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T14:04:03.047Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-09-19T14:04:03.048Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T14:04:03.048Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T14:04:03.048Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T14:04:03.048Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T14:04:03.048Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T14:04:03.048Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T14:04:03.048Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T14:04:03.048Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T14:04:03.049Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T14:04:03.060Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T14:04:03.060Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 14:05:09.294 E kube-apiserver failed contacting the API: Get https://api.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=33425&timeout=6m26s&timeoutSeconds=386&watch=true: dial tcp 35.231.2.27:6443: connect: connection refused
Sep 19 14:05:09.294 E kube-apiserver failed contacting the API: Get https://api.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=34204&timeout=7m52s&timeoutSeconds=472&watch=true: dial tcp 35.231.2.27:6443: connect: connection refused
Sep 19 14:05:09.705 E ns/openshift-marketplace pod/redhat-operators-8469cd4679-5skkz node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=redhat-operators container exited with code 2 (Error): 
Sep 19 14:05:10.411 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:05:22.737 E ns/openshift-marketplace pod/certified-operators-f97f567cb-bs79m node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 2 (Error): 
Sep 19 14:05:36.526 E ns/openshift-marketplace pod/community-operators-bf9d49586-bbhb9 node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=community-operators container exited with code 2 (Error): 
Sep 19 14:05:40.411 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:06:00.794 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 19 14:06:43.932 E ns/openshift-monitoring pod/node-exporter-fg9bf node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 143 (Error): 9-19T13:52:00Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:00Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 14:06:43.986 E ns/openshift-sdn pod/ovs-q9qgc node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=openvswitch container exited with code 1 (Error): s in the last 0 s (2 deletes)\n2020-09-19T14:03:56.802Z|00146|connmgr|INFO|br0<->unix#436: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:03:56.849Z|00147|bridge|INFO|bridge br0: deleted interface vethdcbff3fb on port 14\n2020-09-19T14:03:56.903Z|00148|connmgr|INFO|br0<->unix#439: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:03:56.971Z|00149|connmgr|INFO|br0<->unix#442: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:03:57.015Z|00150|bridge|INFO|bridge br0: deleted interface veth11e19c3a on port 4\n2020-09-19T14:03:57.062Z|00151|connmgr|INFO|br0<->unix#445: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:03:57.104Z|00152|connmgr|INFO|br0<->unix#448: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:03:57.137Z|00153|bridge|INFO|bridge br0: deleted interface vethc1c1069a on port 13\n2020-09-19T14:04:26.007Z|00154|connmgr|INFO|br0<->unix#469: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:04:26.039Z|00155|connmgr|INFO|br0<->unix#472: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:04:26.067Z|00156|bridge|INFO|bridge br0: deleted interface veth3efe887b on port 5\n2020-09-19T14:04:26.124Z|00157|connmgr|INFO|br0<->unix#475: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:04:26.158Z|00158|connmgr|INFO|br0<->unix#478: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:04:26.185Z|00159|bridge|INFO|bridge br0: deleted interface vethdc20152e on port 7\n2020-09-19T14:04:41.416Z|00013|jsonrpc|WARN|unix#437: receive error: Connection reset by peer\n2020-09-19T14:04:41.416Z|00014|reconnect|WARN|unix#437: connection dropped (Connection reset by peer)\n2020-09-19T14:04:41.367Z|00160|connmgr|INFO|br0<->unix#493: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:04:41.397Z|00161|connmgr|INFO|br0<->unix#496: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:04:41.423Z|00162|bridge|INFO|bridge br0: deleted interface veth4a557d86 on port 6\n2020-09-19 14:05:23 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Sep 19 14:06:44.003 E ns/openshift-multus pod/multus-wkv7l node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 143 (Error): 
Sep 19 14:06:44.031 E ns/openshift-machine-config-operator pod/machine-config-daemon-2xlwc node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 19 14:06:44.049 E ns/openshift-cluster-node-tuning-operator pod/tuned-zldlf node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=tuned container exited with code 143 (Error): ineIntel platform\n2020-09-19 14:04:36,482 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-19 14:04:36,484 INFO     tuned.plugins.base: instance disk: assigning devices sda, dm-0\n2020-09-19 14:04:36,485 INFO     tuned.plugins.base: instance net: assigning devices ens4\n2020-09-19 14:04:36,563 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-19 14:04:36,575 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0919 14:04:42.088350   94162 openshift-tuned.go:550] Pod (openshift-cluster-node-tuning-operator/tuned-xszqf) labels changed node wide: false\nI0919 14:04:42.783820   94162 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-8809/service-test-4m8ws) labels changed node wide: true\nI0919 14:04:46.179568   94162 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:04:46.183308   94162 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:04:46.306981   94162 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 14:04:52.097083   94162 openshift-tuned.go:550] Pod (openshift-marketplace/certified-operators-f97f567cb-8ppcx) labels changed node wide: true\nI0919 14:04:56.179434   94162 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:04:56.182542   94162 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:04:56.350785   94162 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 14:05:09.281079   94162 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0919 14:05:09.284546   94162 openshift-tuned.go:881] Pod event watch channel closed.\nI0919 14:05:09.284566   94162 openshift-tuned.go:883] Increasing resyncPeriod to 134\nI0919 14:05:23.675379   94162 openshift-tuned.go:137] Received signal: terminated\n
Sep 19 14:06:52.205 E ns/openshift-machine-config-operator pod/machine-config-daemon-2xlwc node/ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 19 14:06:59.098 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=cluster-policy-controller-6 container exited with code 1 (Error): 925953       1 controller_utils.go:1034] Caches are synced for namespace-security-allocation-controller controller\nI0919 13:50:07.072108       1 controller_utils.go:1034] Caches are synced for resource quota controller\nI0919 13:50:07.574326       1 controller_utils.go:1034] Caches are synced for cluster resource quota controller\nW0919 14:00:35.115685       1 reflector.go:289] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: The resourceVersion for the provided watch is too old.\nW0919 14:00:35.361440       1 reflector.go:289] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: The resourceVersion for the provided watch is too old.\nW0919 14:04:29.433283       1 reflector.go:289] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: The resourceVersion for the provided watch is too old.\nW0919 14:04:30.324342       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ServiceAccount ended with: too old resource version: 22556 (33967)\nW0919 14:04:30.339193       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1beta1.Ingress ended with: too old resource version: 22557 (33967)\nW0919 14:04:30.339360       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1beta1.Ingress ended with: too old resource version: 22557 (33968)\nW0919 14:04:30.358256       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.NetworkPolicy ended with: too old resource version: 22557 (33968)\nW0919 14:04:30.358621       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Namespace ended with: too old resource version: 22555 (33968)\nW0919 14:05:05.378684       1 reflector.go:289] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\n
Sep 19 14:06:59.098 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=kube-controller-manager-cert-syncer-6 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:03:55.144550       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:03:55.145011       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:04:05.164578       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:04:05.165077       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:04:15.178964       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:04:15.179837       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:04:25.198503       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:04:25.198891       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:04:35.208934       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:04:35.209377       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:04:45.220096       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:04:45.220557       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:04:55.231902       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:04:55.232359       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:05:05.242317       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:05:05.242702       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Sep 19 14:06:59.098 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=kube-controller-manager-6 container exited with code 2 (Error): 49:33.816495       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0919 13:49:38.643031       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0919 13:49:49.041982       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found]\n
Sep 19 14:06:59.180 E ns/openshift-monitoring pod/node-exporter-rqb7b node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 143 (Error): 9-19T13:52:31Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:31Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 14:06:59.206 E ns/openshift-controller-manager pod/controller-manager-mzlcd node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=controller-manager container exited with code 1 (Error): 
Sep 19 14:06:59.239 E ns/openshift-sdn pod/sdn-controller-62bpq node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=sdn-controller container exited with code 2 (Error): I0919 13:55:40.164329       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Sep 19 14:06:59.257 E ns/openshift-sdn pod/ovs-h85xc node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=openvswitch container exited with code 1 (Error): e last 0 s (4 deletes)\n2020-09-19T14:04:04.163Z|00292|bridge|INFO|bridge br0: deleted interface veth8e84db28 on port 10\n2020-09-19T14:04:04.223Z|00293|connmgr|INFO|br0<->unix#696: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:04:04.282Z|00294|connmgr|INFO|br0<->unix#699: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:04:04.364Z|00295|bridge|INFO|bridge br0: deleted interface veth56d76063 on port 28\n2020-09-19T14:04:04.427Z|00296|connmgr|INFO|br0<->unix#702: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:04:04.534Z|00297|connmgr|INFO|br0<->unix#705: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:04:04.597Z|00298|bridge|INFO|bridge br0: deleted interface veth7f80dc02 on port 16\n2020-09-19T14:04:04.856Z|00011|jsonrpc|WARN|unix#611: receive error: Connection reset by peer\n2020-09-19T14:04:04.856Z|00012|reconnect|WARN|unix#611: connection dropped (Connection reset by peer)\n2020-09-19T14:04:24.754Z|00299|connmgr|INFO|br0<->unix#725: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:04:24.789Z|00300|connmgr|INFO|br0<->unix#728: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:04:24.826Z|00301|bridge|INFO|bridge br0: deleted interface veth1f50258d on port 22\n2020-09-19T14:04:32.480Z|00302|bridge|INFO|bridge br0: added interface vethbbe05218 on port 35\n2020-09-19T14:04:32.534Z|00303|connmgr|INFO|br0<->unix#736: 5 flow_mods in the last 0 s (5 adds)\n2020-09-19T14:04:32.608Z|00304|connmgr|INFO|br0<->unix#740: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:04:32.611Z|00305|connmgr|INFO|br0<->unix#742: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-09-19T14:04:34.677Z|00306|connmgr|INFO|br0<->unix#745: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:04:34.739Z|00307|connmgr|INFO|br0<->unix#748: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:04:34.794Z|00308|bridge|INFO|bridge br0: deleted interface vethbbe05218 on port 35\n2020-09-19 14:05:08 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Sep 19 14:06:59.279 E ns/openshift-multus pod/multus-admission-controller-b6jkb node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=multus-admission-controller container exited with code 137 (Error): 
Sep 19 14:06:59.303 E ns/openshift-multus pod/multus-kv9nh node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 143 (Error): 
Sep 19 14:06:59.334 E ns/openshift-machine-config-operator pod/machine-config-daemon-6w7zx node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 19 14:06:59.382 E ns/openshift-machine-config-operator pod/machine-config-server-pmwcm node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=machine-config-server container exited with code 2 (Error): I0919 14:04:04.522636       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-12-g747de90f-dirty (747de90fbfb379582694160dcc1181734c795695)\nI0919 14:04:04.527490       1 api.go:56] Launching server on :22623\nI0919 14:04:04.527491       1 api.go:56] Launching server on :22624\n
Sep 19 14:06:59.433 E ns/openshift-cluster-node-tuning-operator pod/tuned-q27d7 node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=tuned container exited with code 143 (Error): 919 14:04:36.381680  109987 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:04:36.397645  109987 openshift-tuned.go:390] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0919 14:04:36.403528  109987 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:04:36.633675  109987 openshift-tuned.go:635] Active profile () != recommended profile (openshift-control-plane)\nI0919 14:04:36.633761  109987 openshift-tuned.go:263] Starting tuned...\n2020-09-19 14:04:36,823 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-09-19 14:04:36,836 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-09-19 14:04:36,836 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-09-19 14:04:36,838 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-09-19 14:04:36,840 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-09-19 14:04:36,901 INFO     tuned.daemon.controller: starting controller\n2020-09-19 14:04:36,901 INFO     tuned.daemon.daemon: starting tuning\n2020-09-19 14:04:36,908 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-09-19 14:04:36,909 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-09-19 14:04:36,916 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-19 14:04:36,919 INFO     tuned.plugins.base: instance disk: assigning devices sda, dm-0\n2020-09-19 14:04:36,920 INFO     tuned.plugins.base: instance net: assigning devices ens4\n2020-09-19 14:04:37,023 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-19 14:04:37,034 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0919 14:04:39.034978  109987 openshift-tuned.go:550] Pod (openshift-cluster-node-tuning-operator/tuned-89vpq) labels changed node wide: false\n
Sep 19 14:07:02.203 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=kube-apiserver-6 container exited with code 1 (Error):  desc = "transport: Error while dialing dial tcp 10.0.0.4:2379: connect: connection refused". Reconnecting...\nW0919 14:05:09.060610       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.0.4:2379: connect: connection refused". Reconnecting...\nW0919 14:05:09.060778       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.0.4:2379: connect: connection refused". Reconnecting...\nW0919 14:05:09.060820       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.0.4:2379: connect: connection refused". Reconnecting...\nW0919 14:05:09.060783       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.0.4:2379: connect: connection refused". Reconnecting...\nW0919 14:05:09.060938       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.0.4:2379: connect: connection refused". Reconnecting...\nW0919 14:05:09.061025       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.0.4:2379: connect: connection refused". Reconnecting...\n
Sep 19 14:07:02.203 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=kube-apiserver-cert-syncer-6 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 13:59:50.160485       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 13:59:50.160948       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 13:59:50.370126       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 13:59:50.370539       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Sep 19 14:07:02.203 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=kube-apiserver-insecure-readyz-6 container exited with code 2 (Error): o get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:49:29.125560       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:49:29.307355       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:49:31.275774       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:49:33.747193       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:49:34.126940       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:49:34.304666       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:49:36.277606       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:49:38.748540       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:49:39.127334       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:49:39.305613       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:49:41.279544       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\n
Sep 19 14:07:03.398 E ns/openshift-etcd pod/etcd-member-ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=etcd-metrics container exited with code 2 (Error): 2020-09-19 14:04:35.338689 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 14:04:35.339566 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-09-19 14:04:35.340009 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 14:04:35.342315 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/09/19 14:04:35 grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.0.4:9978: connect: connection refused". Reconnecting...\n
Sep 19 14:07:04.394 E ns/openshift-monitoring pod/telemeter-client-86769469b7-gkknx node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=reload container exited with code 2 (Error): 
Sep 19 14:07:04.394 E ns/openshift-monitoring pod/telemeter-client-86769469b7-gkknx node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=telemeter-client container exited with code 2 (Error): 
Sep 19 14:07:04.796 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal container=scheduler container exited with code 2 (Error): e found feasible. Bound node resource: "Capacity: CPU<4>|Memory<15387080Ki>|Pods<250>|StorageEphemeral<133665772Ki>; Allocatable: CPU<3500m>|Memory<14236104Ki>|Pods<250>|StorageEphemeral<122112633448>.".\nI0919 14:04:30.092587       1 scheduler.go:667] pod openshift-cluster-node-tuning-operator/tuned-s2vh7 is bound successfully on node "ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal", 6 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<15387080Ki>|Pods<250>|StorageEphemeral<133665772Ki>; Allocatable: CPU<3500m>|Memory<14236104Ki>|Pods<250>|StorageEphemeral<122112633448>.".\nI0919 14:04:30.886220       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6bd795cbf5-54xbs: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0919 14:04:35.894303       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6bd795cbf5-54xbs: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0919 14:04:44.902037       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6bd795cbf5-54xbs: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0919 14:04:55.903810       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6bd795cbf5-54xbs: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Sep 19 14:07:08.175 E ns/openshift-multus pod/multus-kv9nh node/ci-op-ftt6b-m-2.c.openshift-gce-devel-ci.internal invariant violation: pod may not transition Running->Pending
Sep 19 14:07:19.247 E ns/openshift-authentication-operator pod/authentication-operator-86869898df-v8pzn node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=operator container exited with code 255 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-authentication-operator_authentication-operator-86869898df-v8pzn_86547ac3-9320-40fe-a72e-fe8604d4618f/operator/0.log": lstat /var/log/pods/openshift-authentication-operator_authentication-operator-86869898df-v8pzn_86547ac3-9320-40fe-a72e-fe8604d4618f/operator/0.log: no such file or directory
Sep 19 14:07:19.802 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-5fd689x2k node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=operator container exited with code 255 (Error): 8] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0919 14:05:10.377759       1 reflector.go:158] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:134\nI0919 14:05:10.561277       1 reflector.go:158] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:134\nI0919 14:05:10.638299       1 reflector.go:158] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:134\nI0919 14:05:10.638687       1 reflector.go:158] Listing and watching *v1.ServiceCatalogControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0919 14:05:34.024018       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 4 items received\nI0919 14:05:36.897458       1 httplog.go:90] GET /metrics: (25.010148ms) 200 [Prometheus/2.14.0 10.131.0.32:38882]\nI0919 14:05:38.126521       1 httplog.go:90] GET /metrics: (2.018253ms) 200 [Prometheus/2.14.0 10.128.2.32:47464]\nI0919 14:05:53.226540       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 12 items received\nI0919 14:06:06.877758       1 httplog.go:90] GET /metrics: (6.296988ms) 200 [Prometheus/2.14.0 10.131.0.32:38882]\nI0919 14:06:08.127206       1 httplog.go:90] GET /metrics: (2.211436ms) 200 [Prometheus/2.14.0 10.128.2.32:47464]\nI0919 14:06:36.878046       1 httplog.go:90] GET /metrics: (6.579371ms) 200 [Prometheus/2.14.0 10.131.0.32:38882]\nI0919 14:06:38.126278       1 httplog.go:90] GET /metrics: (1.920435ms) 200 [Prometheus/2.14.0 10.128.2.32:47464]\nI0919 14:07:06.881340       1 httplog.go:90] GET /metrics: (9.885247ms) 200 [Prometheus/2.14.0 10.131.0.32:38882]\nI0919 14:07:16.064216       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 14:07:16.064377       1 leaderelection.go:66] leaderelection lost\n
Sep 19 14:07:21.165 E ns/openshift-machine-config-operator pod/machine-config-operator-579c974fbf-bk69k node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=machine-config-operator container exited with code 2 (Error): , Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"8026f585-8ba6-4fa9-b3b5-bc77597144e1", ResourceVersion:"31502", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63736118509, loc:(*time.Location)(0x271c960)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-579c974fbf-bk69k_4f8d79c9-1fb1-4844-8155-577a4608b035\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-09-19T14:01:24Z\",\"renewTime\":\"2020-09-19T14:01:24Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-579c974fbf-bk69k_4f8d79c9-1fb1-4844-8155-577a4608b035 became leader'\nI0919 14:01:24.070302       1 leaderelection.go:251] successfully acquired lease openshift-machine-config-operator/machine-config\nI0919 14:01:24.815850       1 operator.go:246] Starting MachineConfigOperator\nI0919 14:01:24.821523       1 event.go:255] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"06565e0b-f0db-4933-aec5-17e6d6453102", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 4.3.0-0.ci-2020-09-12-051632}] to [{operator 4.3.0-0.ci-2020-09-18-202802}]\nW0919 14:04:30.329780       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 28137 (33967)\n
Sep 19 14:07:21.357 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-5b9f77df44-hkqf9 node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=cluster-node-tuning-operator container exited with code 255 (Error): Map()\nI0919 14:05:45.549863       1 tuned_controller.go:320] syncDaemonSet()\nI0919 14:06:00.511064       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0919 14:06:00.517001       1 status.go:25] syncOperatorStatus()\nI0919 14:06:00.615512       1 tuned_controller.go:188] syncServiceAccount()\nI0919 14:06:00.615897       1 tuned_controller.go:215] syncClusterRole()\nI0919 14:06:00.717411       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0919 14:06:00.840271       1 tuned_controller.go:281] syncClusterConfigMap()\nI0919 14:06:00.847077       1 tuned_controller.go:281] syncClusterConfigMap()\nI0919 14:06:00.855181       1 tuned_controller.go:320] syncDaemonSet()\nI0919 14:06:47.868373       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0919 14:06:47.868411       1 status.go:25] syncOperatorStatus()\nI0919 14:06:47.884255       1 tuned_controller.go:188] syncServiceAccount()\nI0919 14:06:47.884424       1 tuned_controller.go:215] syncClusterRole()\nI0919 14:06:47.939533       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0919 14:06:47.981781       1 tuned_controller.go:281] syncClusterConfigMap()\nI0919 14:06:47.986996       1 tuned_controller.go:281] syncClusterConfigMap()\nI0919 14:06:47.990980       1 tuned_controller.go:320] syncDaemonSet()\nI0919 14:07:04.697024       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0919 14:07:04.697058       1 status.go:25] syncOperatorStatus()\nI0919 14:07:04.714492       1 tuned_controller.go:188] syncServiceAccount()\nI0919 14:07:04.714633       1 tuned_controller.go:215] syncClusterRole()\nI0919 14:07:04.749939       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0919 14:07:04.787072       1 tuned_controller.go:281] syncClusterConfigMap()\nI0919 14:07:04.797459       1 tuned_controller.go:281] syncClusterConfigMap()\nI0919 14:07:04.801626       1 tuned_controller.go:320] syncDaemonSet()\nF0919 14:07:18.508545       1 main.go:82] <nil>\n
Sep 19 14:07:21.437 E ns/openshift-cluster-machine-approver pod/machine-approver-7498ffbb98-mvp6m node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=machine-approver-controller container exited with code 2 (Error): imit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0919 13:51:53.039002       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0919 13:52:00.136021       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:serviceaccount:openshift-cluster-machine-approver:machine-approver-sa" cannot list resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:controller:machine-approver" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found]\n
Sep 19 14:07:21.478 E ns/openshift-service-ca pod/service-serving-cert-signer-7bd46b9884-znphn node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Sep 19 14:07:22.785 E ns/openshift-console-operator pod/console-operator-c6dbcdb98-2d6jw node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=console-operator container exited with code 255 (Error): o/oauth/informers/externalversions/factory.go:101 (started: 2020-09-19 14:05:21.444055409 +0000 UTC m=+63.844194495) (total time: 1m0.003490051s):\nTrace[1966791165]: [1m0.003490051s] [1m0.003490051s] END\nE0919 14:06:21.447595       1 reflector.go:123] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: Failed to list *v1.OAuthClient: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io)\nE0919 14:07:19.263593       1 status.go:73] DeploymentAvailable FailedUpdate 1 replicas ready at version 4.3.0-0.ci-2020-09-18-202802\nI0919 14:07:19.302428       1 status_controller.go:175] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-19T13:31:44Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-19T13:53:38Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-19T14:07:19Z","message":"DeploymentAvailable: 1 replicas ready at version 4.3.0-0.ci-2020-09-18-202802","reason":"Deployment_FailedUpdate","status":"False","type":"Available"},{"lastTransitionTime":"2020-09-19T13:31:44Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0919 14:07:19.337687       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"5b3ec3a1-6700-45ea-98f6-2828d0fa20ee", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from True to False ("DeploymentAvailable: 1 replicas ready at version 4.3.0-0.ci-2020-09-18-202802")\nE0919 14:07:19.597336       1 status.go:73] DeploymentAvailable FailedUpdate 1 replicas ready at version 4.3.0-0.ci-2020-09-18-202802\nI0919 14:07:21.328839       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 14:07:21.328997       1 leaderelection.go:66] leaderelection lost\n
Sep 19 14:09:36.952 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io prometheus-k8s)
Sep 19 14:09:49.083 E ns/openshift-monitoring pod/node-exporter-99lrr node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 143 (Error): 9-19T13:52:19Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:19Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 14:09:49.133 E ns/openshift-sdn pod/ovs-ljlxw node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=openvswitch container exited with code 1 (Error): in the last 0 s (4 deletes)\n2020-09-19T14:07:02.376Z|00195|bridge|INFO|bridge br0: deleted interface veth5c5c6318 on port 13\n2020-09-19T14:07:02.438Z|00196|connmgr|INFO|br0<->unix#674: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:07:02.482Z|00197|connmgr|INFO|br0<->unix#677: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:07:02.572Z|00198|bridge|INFO|bridge br0: deleted interface vethc865f311 on port 18\n2020-09-19T14:07:02.622Z|00199|connmgr|INFO|br0<->unix#680: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:07:02.676Z|00200|connmgr|INFO|br0<->unix#683: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:07:02.716Z|00201|bridge|INFO|bridge br0: deleted interface vethb2c94b39 on port 12\n2020-09-19T14:07:02.792Z|00202|connmgr|INFO|br0<->unix#686: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:07:02.847Z|00203|connmgr|INFO|br0<->unix#689: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:07:02.943Z|00204|bridge|INFO|bridge br0: deleted interface vethf9f9aca1 on port 7\n2020-09-19T14:07:02.992Z|00205|connmgr|INFO|br0<->unix#692: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:07:03.038Z|00206|connmgr|INFO|br0<->unix#695: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:07:03.087Z|00207|bridge|INFO|bridge br0: deleted interface veth647fd6f9 on port 3\n2020-09-19T14:07:03.138Z|00208|connmgr|INFO|br0<->unix#698: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:07:03.204Z|00209|connmgr|INFO|br0<->unix#701: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:07:03.253Z|00210|bridge|INFO|bridge br0: deleted interface vethe3f4dd4b on port 16\n2020-09-19T14:07:46.652Z|00211|connmgr|INFO|br0<->unix#737: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:07:46.686Z|00212|connmgr|INFO|br0<->unix#740: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:07:46.718Z|00213|bridge|INFO|bridge br0: deleted interface veth7f59714e on port 8\n2020-09-19 14:08:29 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Sep 19 14:09:49.195 E ns/openshift-multus pod/multus-lw6xl node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 143 (Error): 
Sep 19 14:09:49.215 E ns/openshift-cluster-node-tuning-operator pod/tuned-tj6hx node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=tuned container exited with code 143 (Error): :263] Starting tuned...\n2020-09-19 14:07:43,929 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-09-19 14:07:43,938 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-09-19 14:07:43,938 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-09-19 14:07:43,940 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-09-19 14:07:43,941 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-09-19 14:07:43,979 INFO     tuned.daemon.controller: starting controller\n2020-09-19 14:07:43,979 INFO     tuned.daemon.daemon: starting tuning\n2020-09-19 14:07:43,986 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-09-19 14:07:43,987 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-09-19 14:07:43,992 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-19 14:07:43,994 INFO     tuned.plugins.base: instance disk: assigning devices sda, dm-0\n2020-09-19 14:07:43,996 INFO     tuned.plugins.base: instance net: assigning devices ens4\n2020-09-19 14:07:44,067 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-19 14:07:44,078 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0919 14:07:46.871228  104342 openshift-tuned.go:550] Pod (openshift-cluster-node-tuning-operator/tuned-nksft) labels changed node wide: false\nI0919 14:07:56.876497  104342 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-8809/service-test-5l628) labels changed node wide: true\nI0919 14:07:58.654960  104342 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:07:58.658172  104342 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:07:58.849458  104342 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Sep 19 14:09:49.281 E ns/openshift-machine-config-operator pod/machine-config-daemon-gdbf2 node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 19 14:09:55.941 E ns/openshift-machine-config-operator pod/machine-config-daemon-gdbf2 node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 19 14:10:00.895 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 19 14:10:07.684 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2020/09/19 13:52:13 Watching directory: "/etc/alertmanager/config"\n
Sep 19 14:10:07.684 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/19 13:52:13 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 13:52:13 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 13:52:13 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 13:52:13 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/19 13:52:13 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 13:52:13 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 13:52:13 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 13:52:13 http.go:106: HTTPS: listening on [::]:9095\n2020/09/19 13:56:10 reverseproxy.go:447: http: proxy error: context canceled\n
Sep 19 14:10:07.737 E ns/openshift-marketplace pod/community-operators-7fb89c8b54-gbftd node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=community-operators container exited with code 2 (Error): 
Sep 19 14:10:07.756 E ns/openshift-ingress pod/router-default-d7494768c-vwcd2 node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): r reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 14:07:57.929610       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 14:08:25.938968       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nW0919 14:08:26.358126       1 reflector.go:299] github.com/openshift/router/pkg/router/controller/factory/factory.go:115: watch of *v1.Route ended with: very short watch: github.com/openshift/router/pkg/router/controller/factory/factory.go:115: Unexpected watch close - watch lasted less than a second and no items received\nI0919 14:09:26.475400       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 14:09:31.471804       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 14:09:43.650915       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 14:09:48.645578       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 14:09:53.643286       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 14:09:58.650149       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0919 14:10:03.645515       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Sep 19 14:10:18.910 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=scheduler container exited with code 2 (Error): ollers)\nE0919 13:52:00.086092       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0919 13:52:00.126918       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0919 13:52:00.126987       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)\nE0919 13:52:00.127093       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\nE0919 13:52:00.145558       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0919 13:52:00.145637       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0919 13:52:09.626003       1 eventhandlers.go:288] scheduler cache UpdatePod failed: pod 7b5767e7-e559-4bc9-a7d7-72fe889a062b is not added to scheduler cache, so cannot be updated\nE0919 13:52:09.635432       1 eventhandlers.go:316] scheduler cache RemovePod failed: pod 7b5767e7-e559-4bc9-a7d7-72fe889a062b is not found in scheduler cache, so cannot be removed from it\nW0919 14:07:50.169226       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 25442 (37046)\nW0919 14:07:50.275000       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolume ended with: too old resource version: 25439 (37046)\nW0919 14:07:50.361608       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: too old resource version: 25488 (37047)\nW0919 14:07:50.433990       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CSINode ended with: too old resource version: 25486 (37047)\n
Sep 19 14:10:19.094 E ns/openshift-monitoring pod/node-exporter-rr7th node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 143 (Error): 9-19T13:52:40Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:40Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 14:10:19.118 E ns/openshift-controller-manager pod/controller-manager-zp4c7 node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=controller-manager container exited with code 1 (Error): 
Sep 19 14:10:19.157 E ns/openshift-sdn pod/sdn-controller-twkpf node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=sdn-controller container exited with code 2 (Error): I0919 13:55:19.490231       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Sep 19 14:10:19.193 E ns/openshift-sdn pod/ovs-6vh5l node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=openvswitch container exited with code 1 (Error): 0 s (2 deletes)\n2020-09-19T14:07:22.128Z|00278|connmgr|INFO|br0<->unix#910: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:07:22.193Z|00279|bridge|INFO|bridge br0: deleted interface veth2e72944e on port 24\n2020-09-19T14:07:22.835Z|00280|bridge|INFO|bridge br0: added interface vetha7b59128 on port 35\n2020-09-19T14:07:22.890Z|00281|connmgr|INFO|br0<->unix#914: 5 flow_mods in the last 0 s (5 adds)\n2020-09-19T14:07:22.990Z|00282|connmgr|INFO|br0<->unix#919: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-09-19T14:07:22.991Z|00283|connmgr|INFO|br0<->unix#920: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:07:25.751Z|00284|connmgr|INFO|br0<->unix#925: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:07:25.794Z|00285|connmgr|INFO|br0<->unix#928: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:07:25.828Z|00286|bridge|INFO|bridge br0: deleted interface vetha7b59128 on port 35\n2020-09-19T14:07:34.207Z|00287|bridge|INFO|bridge br0: added interface vethe5608a45 on port 36\n2020-09-19T14:07:34.246Z|00288|connmgr|INFO|br0<->unix#935: 5 flow_mods in the last 0 s (5 adds)\n2020-09-19T14:07:34.309Z|00289|connmgr|INFO|br0<->unix#939: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:07:34.310Z|00290|connmgr|INFO|br0<->unix#941: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-09-19T14:07:36.852Z|00291|connmgr|INFO|br0<->unix#946: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:07:36.899Z|00292|connmgr|INFO|br0<->unix#949: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:07:36.952Z|00293|bridge|INFO|bridge br0: deleted interface vethe5608a45 on port 36\n2020-09-19T14:07:42.108Z|00294|connmgr|INFO|br0<->unix#960: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:07:42.139Z|00295|connmgr|INFO|br0<->unix#963: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:07:42.170Z|00296|bridge|INFO|bridge br0: deleted interface vethe48150bc on port 34\n2020-09-19 14:08:25 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Sep 19 14:10:19.257 E ns/openshift-multus pod/multus-admission-controller-lk54l node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=multus-admission-controller container exited with code 255 (Error): 
Sep 19 14:10:19.315 E ns/openshift-multus pod/multus-2l2sn node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 143 (Error): 
Sep 19 14:10:19.408 E ns/openshift-machine-config-operator pod/machine-config-daemon-4nx72 node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 19 14:10:19.423 E ns/openshift-machine-config-operator pod/machine-config-server-x9gjk node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=machine-config-server container exited with code 2 (Error): I0919 14:04:29.599773       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-12-g747de90f-dirty (747de90fbfb379582694160dcc1181734c795695)\nI0919 14:04:29.602202       1 api.go:56] Launching server on :22624\nI0919 14:04:29.602419       1 api.go:56] Launching server on :22623\n
Sep 19 14:10:19.438 E ns/openshift-cluster-node-tuning-operator pod/tuned-q8l8j node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=tuned container exited with code 143 (Error):  instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-09-19 14:07:44,202 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-09-19 14:07:44,207 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-19 14:07:44,208 INFO     tuned.plugins.base: instance disk: assigning devices sda, dm-0\n2020-09-19 14:07:44,210 INFO     tuned.plugins.base: instance net: assigning devices ens4\n2020-09-19 14:07:44,285 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-19 14:07:44,296 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0919 14:07:48.086318  122148 openshift-tuned.go:550] Pod (openshift-cluster-node-tuning-operator/tuned-s2vh7) labels changed node wide: false\nI0919 14:07:48.115669  122148 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-67d76c8b86-wf44z) labels changed node wide: true\nI0919 14:07:48.827445  122148 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:07:48.830050  122148 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:07:48.992755  122148 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 14:07:49.283864  122148 openshift-tuned.go:550] Pod (kube-system/gcp-routes-controller-ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal) labels changed node wide: false\nI0919 14:07:49.283980  122148 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal) labels changed node wide: true\nI0919 14:07:53.829237  122148 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:07:53.832563  122148 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:07:54.255077  122148 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
Sep 19 14:10:23.491 E ns/openshift-etcd pod/etcd-member-ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=etcd-metrics container exited with code 2 (Error): 2020-09-19 14:07:54.753305 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 14:07:54.755220 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-09-19 14:07:54.755738 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 14:07:54.775481 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/09/19 14:07:54 grpc: addrConn.createTransport failed to connect to {https://etcd-0.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.0.5:9978: connect: connection refused". Reconnecting...\n
Sep 19 14:10:23.537 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=kube-apiserver-6 container exited with code 1 (Error): shift.io-clientCA-reload ok\n[+]poststarthook/openshift.io-requestheader-reload ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-kubernetes-informers-synched ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nhealthz check failed\nI0919 14:08:26.219740       1 healthz.go:191] [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-discovery-available ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/openshift.io-clientCA-reload ok\n[+]poststarthook/openshift.io-requestheader-reload ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-kubernetes-informers-synched ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nhealthz check failed\n
Sep 19 14:10:23.537 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=kube-apiserver-cert-syncer-6 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 14:02:00.937642       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:02:00.938121       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 14:02:01.146189       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:02:01.146518       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Sep 19 14:10:23.537 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=kube-apiserver-insecure-readyz-6 container exited with code 2 (Error): o get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:51:38.818813       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:51:39.052562       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:51:39.296398       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:51:43.493453       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:51:43.819779       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:51:44.052766       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:51:44.298244       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:51:48.495649       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:51:48.821836       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:51:49.053705       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:51:49.300139       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\n
Sep 19 14:10:26.121 E ns/openshift-multus pod/multus-2l2sn node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal invariant violation: pod may not transition Running->Pending
Sep 19 14:10:28.226 E ns/openshift-multus pod/multus-2l2sn node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal invariant violation: pod may not transition Running->Pending
Sep 19 14:10:30.267 E ns/openshift-machine-config-operator pod/machine-config-daemon-4nx72 node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 19 14:10:30.302 E ns/openshift-multus pod/multus-2l2sn node/ci-op-ftt6b-m-0.c.openshift-gce-devel-ci.internal invariant violation: pod may not transition Running->Pending
Sep 19 14:10:42.033 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Sep 19 14:11:53.924 E kube-apiserver failed contacting the API: Get https://api.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=40316&timeout=6m41s&timeoutSeconds=401&watch=true: dial tcp 35.231.2.27:6443: connect: connection refused
Sep 19 14:12:40.411 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:12:52.061 E ns/openshift-monitoring pod/node-exporter-t98m9 node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 143 (Error): 9-19T13:51:55Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T13:51:55Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 14:12:52.170 E ns/openshift-multus pod/multus-rxl6z node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 143 (Error): 
Sep 19 14:12:52.208 E ns/openshift-sdn pod/ovs-ch52d node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=openvswitch container exited with code 1 (Error):  the last 0 s (4 deletes)\n2020-09-19T14:10:06.954Z|00174|bridge|INFO|bridge br0: deleted interface veth687162d2 on port 20\n2020-09-19T14:10:07.008Z|00175|connmgr|INFO|br0<->unix#774: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:10:07.089Z|00176|connmgr|INFO|br0<->unix#777: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:10:07.135Z|00177|bridge|INFO|bridge br0: deleted interface veth8289fa7e on port 19\n2020-09-19T14:10:07.189Z|00178|connmgr|INFO|br0<->unix#780: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:10:07.254Z|00179|connmgr|INFO|br0<->unix#783: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:10:07.288Z|00180|bridge|INFO|bridge br0: deleted interface vethb1a43ed9 on port 3\n2020-09-19T14:10:07.348Z|00181|connmgr|INFO|br0<->unix#786: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:10:07.387Z|00182|connmgr|INFO|br0<->unix#789: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:10:07.419Z|00183|bridge|INFO|bridge br0: deleted interface veth17e6ebf9 on port 17\n2020-09-19T14:10:35.598Z|00184|connmgr|INFO|br0<->unix#811: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:10:35.631Z|00185|connmgr|INFO|br0<->unix#814: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:10:35.662Z|00186|bridge|INFO|bridge br0: deleted interface veth05ee690f on port 11\n2020-09-19T14:10:35.702Z|00187|connmgr|INFO|br0<->unix#817: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:10:35.744Z|00188|connmgr|INFO|br0<->unix#820: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:10:35.776Z|00189|bridge|INFO|bridge br0: deleted interface veth5debda52 on port 15\n2020-09-19T14:10:50.851Z|00190|connmgr|INFO|br0<->unix#835: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:10:50.884Z|00191|connmgr|INFO|br0<->unix#838: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:10:50.913Z|00192|bridge|INFO|bridge br0: deleted interface vethd877ca0f on port 12\n2020-09-19 14:11:32 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Sep 19 14:12:52.283 E ns/openshift-machine-config-operator pod/machine-config-daemon-ztkc7 node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 19 14:12:52.296 E ns/openshift-cluster-node-tuning-operator pod/tuned-9gz7z node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=tuned container exited with code 143 (Error): penshift-tuned.go:390] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0919 14:11:10.237616  127722 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:11:10.397696  127722 openshift-tuned.go:635] Active profile () != recommended profile (openshift-node)\nI0919 14:11:10.397747  127722 openshift-tuned.go:263] Starting tuned...\n2020-09-19 14:11:10,551 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-09-19 14:11:10,559 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-09-19 14:11:10,560 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-09-19 14:11:10,562 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-09-19 14:11:10,563 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-09-19 14:11:10,613 INFO     tuned.daemon.controller: starting controller\n2020-09-19 14:11:10,613 INFO     tuned.daemon.daemon: starting tuning\n2020-09-19 14:11:10,621 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-09-19 14:11:10,622 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-09-19 14:11:10,626 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-19 14:11:10,629 INFO     tuned.plugins.base: instance disk: assigning devices sda, dm-0\n2020-09-19 14:11:10,630 INFO     tuned.plugins.base: instance net: assigning devices ens4\n2020-09-19 14:11:10,732 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-19 14:11:10,746 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n2020-09-19 14:11:32,705 INFO     tuned.daemon.controller: terminating controller\n2020-09-19 14:11:32,706 INFO     tuned.daemon.daemon: stopping tuning\nI0919 14:11:32.706095  127722 openshift-tuned.go:137] Received signal: terminated\nI0919 14:11:32.706165  127722 openshift-tuned.go:304] Sending TERM to PID 127883\n
Sep 19 14:12:58.733 E ns/openshift-machine-config-operator pod/machine-config-daemon-ztkc7 node/ci-op-ftt6b-w-c-9sv7n.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 19 14:13:09.980 E ns/openshift-marketplace pod/redhat-operators-99bd8d76d-vjsdd node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=redhat-operators container exited with code 2 (Error): 
Sep 19 14:13:10.040 E ns/openshift-marketplace pod/community-operators-7fb89c8b54-tg7gh node/ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal container=community-operators container exited with code 2 (Error): 
Sep 19 14:13:42.716 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=cluster-policy-controller-6 container exited with code 1 (Error): mers/factory.go:133: watch of *v1.Role ended with: too old resource version: 28301 (40144)\nW0919 14:11:18.097640       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1beta1.CronJob ended with: too old resource version: 28299 (40144)\nW0919 14:11:18.109433       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Namespace ended with: too old resource version: 28298 (40144)\nW0919 14:11:18.127956       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.LimitRange ended with: too old resource version: 28298 (40144)\nW0919 14:11:18.128041       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 33968 (40144)\nW0919 14:11:18.197198       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1beta1.Ingress ended with: too old resource version: 28300 (40144)\nW0919 14:11:18.218222       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.RoleBinding ended with: too old resource version: 28301 (40144)\nW0919 14:11:18.230904       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ServiceAccount ended with: too old resource version: 33967 (40144)\nW0919 14:11:18.230997       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.PodTemplate ended with: too old resource version: 33968 (40144)\nW0919 14:11:18.231023       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1beta1.Ingress ended with: too old resource version: 33967 (40144)\nW0919 14:11:18.231043       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.HorizontalPodAutoscaler ended with: too old resource version: 28299 (40144)\nE0919 14:11:53.559508       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\n
Sep 19 14:13:42.716 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=kube-controller-manager-cert-syncer-6 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:10:37.276780       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:10:37.277261       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:10:47.292585       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:10:47.292893       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:10:57.322338       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:10:57.323156       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:11:07.333109       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:11:07.333867       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:11:17.354328       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:11:17.354981       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:11:27.365139       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:11:27.365591       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:11:37.381756       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:11:37.382167       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:11:47.394974       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:11:47.395844       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Sep 19 14:13:42.716 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=kube-controller-manager-6 container exited with code 2 (Error): yncing deployment openshift-marketplace/community-operators: Operation cannot be fulfilled on deployments.apps "community-operators": the object has been modified; please apply your changes to the latest version and try again\nI0919 14:11:23.034980       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"community-operators-9fc8b49b8", UID:"95715de2-08d7-47be-b975-552a82c22cfe", APIVersion:"apps/v1", ResourceVersion:"40266", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: community-operators-9fc8b49b8-5fdgt\nI0919 14:11:29.273455       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/telemeter-client: Operation cannot be fulfilled on deployments.apps "telemeter-client": the object has been modified; please apply your changes to the latest version and try again\nI0919 14:11:34.444560       1 event.go:255] Event(v1.ObjectReference{Kind:"Service", Namespace:"e2e-k8s-service-lb-available-8809", Name:"service-test", UID:"bfad882b-1ce4-41ab-a972-015862288c95", APIVersion:"v1", ResourceVersion:"20529", FieldPath:""}): type: 'Normal' reason: 'UpdatedLoadBalancer' Updated load balancer with new hosts\nI0919 14:11:34.680459       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/prometheus-adapter: Operation cannot be fulfilled on deployments.apps "prometheus-adapter": the object has been modified; please apply your changes to the latest version and try again\nI0919 14:11:35.273701       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/thanos-querier: Operation cannot be fulfilled on deployments.apps "thanos-querier": the object has been modified; please apply your changes to the latest version and try again\nI0919 14:11:41.110523       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/grafana: Operation cannot be fulfilled on deployments.apps "grafana": the object has been modified; please apply your changes to the latest version and try again\n
Sep 19 14:13:42.780 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=scheduler container exited with code 2 (Error): too old resource version: 33968 (40144)\nI0919 14:11:21.653499       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6bd795cbf5-pmm6f: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0919 14:11:22.559304       1 scheduler.go:667] pod openshift-marketplace/redhat-operators-78845f478f-nl4jt is bound successfully on node "ci-op-ftt6b-w-d-bgj8j.c.openshift-gce-devel-ci.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<15387064Ki>|Pods<250>|StorageEphemeral<133665772Ki>; Allocatable: CPU<3500m>|Memory<14236088Ki>|Pods<250>|StorageEphemeral<122112633448>.".\nI0919 14:11:22.881937       1 scheduler.go:667] pod openshift-marketplace/certified-operators-fdf84c9b-s4pfs is bound successfully on node "ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<15387276Ki>|Pods<250>|StorageEphemeral<133665772Ki>; Allocatable: CPU<3500m>|Memory<14236300Ki>|Pods<250>|StorageEphemeral<122112633448>.".\nI0919 14:11:23.059002       1 scheduler.go:667] pod openshift-marketplace/community-operators-9fc8b49b8-5fdgt is bound successfully on node "ci-op-ftt6b-w-b-49slf.c.openshift-gce-devel-ci.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<15387276Ki>|Pods<250>|StorageEphemeral<133665772Ki>; Allocatable: CPU<3500m>|Memory<14236300Ki>|Pods<250>|StorageEphemeral<122112633448>.".\nI0919 14:11:31.663624       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6bd795cbf5-pmm6f: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Sep 19 14:13:42.879 E ns/openshift-monitoring pod/node-exporter-42hgq node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 143 (Error): 9-19T13:52:48Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T13:52:48Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 14:13:42.905 E ns/openshift-controller-manager pod/controller-manager-62gp9 node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=controller-manager container exited with code 1 (Error): 
Sep 19 14:13:42.926 E ns/openshift-sdn pod/sdn-controller-659w4 node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=sdn-controller container exited with code 2 (Error): conds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-09-19T13:21:27Z\",\"renewTime\":\"2020-09-19T13:55:27Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal became leader'\nI0919 13:55:27.762378       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0919 13:55:27.768031       1 master.go:51] Initializing SDN master\nI0919 13:55:27.818052       1 network_controller.go:60] Started OpenShift Network Controller\nW0919 14:04:30.394776       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 28359 (33969)\nW0919 14:04:30.402018       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 28360 (33969)\nW0919 14:11:18.098604       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 28298 (40144)\nW0919 14:11:18.098798       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 33969 (40144)\nW0919 14:11:18.227835       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 33969 (40144)\n
Sep 19 14:13:42.967 E ns/openshift-multus pod/multus-admission-controller-df2qn node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=multus-admission-controller container exited with code 255 (Error): 
Sep 19 14:13:43.055 E ns/openshift-multus pod/multus-t48jh node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 143 (Error): 
Sep 19 14:13:43.078 E ns/openshift-machine-config-operator pod/machine-config-daemon-qbv4j node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 19 14:13:43.164 E ns/openshift-machine-config-operator pod/machine-config-server-h979t node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=machine-config-server container exited with code 2 (Error): I0919 14:04:17.688777       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-12-g747de90f-dirty (747de90fbfb379582694160dcc1181734c795695)\nI0919 14:04:17.690666       1 api.go:56] Launching server on :22624\nI0919 14:04:17.690707       1 api.go:56] Launching server on :22623\n
Sep 19 14:13:43.195 E ns/openshift-cluster-node-tuning-operator pod/tuned-xl8qj node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=tuned container exited with code 143 (Error): 10,595 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-09-19 14:11:10,596 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-09-19 14:11:10,597 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-09-19 14:11:10,640 INFO     tuned.daemon.controller: starting controller\n2020-09-19 14:11:10,640 INFO     tuned.daemon.daemon: starting tuning\n2020-09-19 14:11:10,646 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-09-19 14:11:10,647 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-09-19 14:11:10,651 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-19 14:11:10,653 INFO     tuned.plugins.base: instance disk: assigning devices sda, dm-0\n2020-09-19 14:11:10,654 INFO     tuned.plugins.base: instance net: assigning devices ens4\n2020-09-19 14:11:10,728 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-19 14:11:10,737 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0919 14:11:14.384632  140736 openshift-tuned.go:550] Pod (openshift-kube-scheduler/revision-pruner-7-ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal) labels changed node wide: false\nI0919 14:11:16.326679  140736 openshift-tuned.go:550] Pod (openshift-cluster-node-tuning-operator/tuned-xzhn6) labels changed node wide: false\nI0919 14:11:16.364967  140736 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-77d7cf9f6f-zq7qk) labels changed node wide: true\nI0919 14:11:20.281569  140736 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:11:20.286200  140736 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:11:20.523579  140736 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
Sep 19 14:13:45.703 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=kube-apiserver-6 container exited with code 1 (Error): erseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=1501, ErrCode=NO_ERROR, debug=""\nI0919 14:11:53.582422       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=1501, ErrCode=NO_ERROR, debug=""\nI0919 14:11:53.582594       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=1501, ErrCode=NO_ERROR, debug=""\nI0919 14:11:53.582704       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=1501, ErrCode=NO_ERROR, debug=""\nI0919 14:11:53.612306       1 healthz.go:191] [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-discovery-available ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/openshift.io-kubernetes-informers-synched ok\n[+]poststarthook/openshift.io-clientCA-reload ok\n[+]poststarthook/openshift.io-requestheader-reload ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nhealthz check failed\n
Sep 19 14:13:45.703 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=kube-apiserver-cert-syncer-6 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 14:04:17.333726       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:04:17.334290       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 14:04:17.542621       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:04:17.544197       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Sep 19 14:13:45.703 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=kube-apiserver-insecure-readyz-6 container exited with code 2 (Error): o get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:53:57.827788       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:53:57.951601       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:53:59.069532       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:54:00.768543       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:54:02.828457       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:54:02.952003       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:54:04.069920       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:54:05.770288       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:54:07.829733       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:54:07.953003       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\nW0919 13:54:09.071397       1 readyz.go:83] Failed to get "https://localhost:6443/readyz": Get https://localhost:6443/readyz: dial tcp [::1]:6443: connect: connection refused\n
Sep 19 14:13:46.887 E ns/openshift-multus pod/multus-t48jh node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal invariant violation: pod may not transition Running->Pending
Sep 19 14:13:47.102 E ns/openshift-etcd pod/etcd-member-ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=etcd-metrics container exited with code 2 (Error): 2020-09-19 14:11:22.121687 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 14:11:22.123202 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-09-19 14:11:22.123880 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 14:11:22.127259 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/09/19 14:11:22 grpc: addrConn.createTransport failed to connect to {https://etcd-1.ci-op-ciq9hg0f-5d1b5.origin-ci-int-gce.dev.openshift.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.0.3:9978: connect: connection refused". Reconnecting...\n
Sep 19 14:13:49.539 E ns/openshift-monitoring pod/node-exporter-42hgq node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal invariant violation: pod may not transition Running->Pending
Sep 19 14:13:49.648 E ns/openshift-multus pod/multus-t48jh node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal invariant violation: pod may not transition Running->Pending
Sep 19 14:13:52.095 E ns/openshift-machine-config-operator pod/machine-config-daemon-qbv4j node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 19 14:13:55.322 E ns/openshift-multus pod/multus-t48jh node/ci-op-ftt6b-m-1.c.openshift-gce-devel-ci.internal invariant violation: pod may not transition Running->Pending