Result: SUCCESS
Tests: 1 failed / 21 succeeded
Started: 2020-09-19 14:12
Elapsed: 1h7m
Work namespace: ci-op-55jkc77t
Pod: 1fb9632d-fa82-11ea-a1fd-0a580a800db2
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (34m9s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
228 error-level events were detected during this test run:

Sep 19 14:37:45.188 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-84496c5d56-k78qz node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-apiserver-operator container exited with code 255 (Error): changed: Progressing changed from True to False ("Progressing: 3 nodes are at revision 6"),Available message changed from "Available: 3 nodes are active; 1 nodes are at revision 2; 2 nodes are at revision 6" to "Available: 3 nodes are active; 3 nodes are at revision 6"\nI0919 14:36:04.795583       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"6cbe9916-fa83-11ea-94fb-42010a000004", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-6 -n openshift-kube-apiserver: cause by changes in data.status\nI0919 14:36:06.016061       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"6cbe9916-fa83-11ea-94fb-42010a000004", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "StaticPodsDegraded: nodes/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal pods/kube-apiserver-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=\"kube-apiserver-6\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"\nI0919 14:36:09.000511       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"6cbe9916-fa83-11ea-94fb-42010a000004", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-6-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal -n openshift-kube-apiserver because it was missing\nI0919 14:37:44.429524       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 14:37:44.429688       1 leaderelection.go:66] leaderelection lost\nF0919 14:37:44.463527       1 builder.go:217] server exited\n
Sep 19 14:39:11.443 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-68b5c85cb6-tl248 node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-controller-manager-operator container exited with code 255 (Error):  of *v1.Secret ended with: too old resource version: 13813 (15178)\nW0919 14:35:42.253712       1 reflector.go:289] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 5211 (15231)\nW0919 14:35:42.253767       1 reflector.go:289] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 5211 (15211)\nW0919 14:35:42.298395       1 reflector.go:289] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 5211 (15216)\nW0919 14:35:42.298822       1 reflector.go:289] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 5211 (15216)\nW0919 14:35:42.298873       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Service ended with: too old resource version: 14495 (15178)\nW0919 14:35:42.298925       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 15170 (15806)\nW0919 14:35:42.298953       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ServiceAccount ended with: too old resource version: 9308 (15178)\nW0919 14:35:42.298984       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 15170 (15806)\nW0919 14:35:42.310354       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Secret ended with: too old resource version: 14449 (15178)\nW0919 14:35:42.310495       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Namespace ended with: too old resource version: 9806 (15178)\nI0919 14:39:10.551260       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 14:39:10.551332       1 leaderelection.go:66] leaderelection lost\n
Sep 19 14:41:28.430 E ns/openshift-cluster-node-tuning-operator pod/tuned-d7k9h node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=tuned container exited with code 143 (Error): ens4\n2020-09-19 14:34:13,581 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-19 14:34:13,583 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0919 14:36:17.381544   22841 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-replicaset-upgrade-7457/rs-wd9hs) labels changed node wide: true\nI0919 14:36:18.110342   22841 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:36:18.113815   22841 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:36:18.282858   22841 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 14:36:31.522062   22841 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-deployment-upgrade-3992/dp-9fcb69c69-sxc96) labels changed node wide: true\nI0919 14:36:33.110399   22841 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:36:33.113914   22841 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:36:33.249357   22841 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 14:36:43.692563   22841 openshift-tuned.go:550] Pod (e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-8235/pod-configmap-7a200497-fa85-11ea-b1fc-0a58ac103748) labels changed node wide: false\nI0919 14:37:03.510899   22841 openshift-tuned.go:550] Pod (e2e-k8s-service-upgrade-9061/service-test-d5w5n) labels changed node wide: true\nI0919 14:37:08.110435   22841 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:37:08.114054   22841 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:37:08.259036   22841 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nE0919 14:40:50.632140   22841 openshift-tuned.go:881] Pod event watch channel closed.\nI0919 14:40:50.632191   22841 openshift-tuned.go:883] Increasing resyncPeriod to 108\n
Sep 19 14:41:28.453 E ns/openshift-cluster-node-tuning-operator pod/tuned-jmgcn node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=tuned container exited with code 143 (Error): ed\nI0919 14:36:17.508310   23680 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-daemonset-upgrade-8342/ds1-8cm6l) labels changed node wide: true\nI0919 14:36:18.046418   23680 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:36:18.049142   23680 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:36:18.211721   23680 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 14:36:33.728260   23680 openshift-tuned.go:550] Pod (e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-7494/pod-secrets-7a2c9b24-fa85-11ea-b1fc-0a58ac103748) labels changed node wide: false\nI0919 14:36:40.758093   23680 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-deployment-upgrade-3992/dp-6f48f47956-hm6dc) labels changed node wide: true\nI0919 14:36:43.046509   23680 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:36:43.049523   23680 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:36:43.186418   23680 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 14:36:43.797604   23680 openshift-tuned.go:550] Pod (e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-7494/pod-secrets-7a2c9b24-fa85-11ea-b1fc-0a58ac103748) labels changed node wide: false\nI0919 14:37:03.525924   23680 openshift-tuned.go:550] Pod (e2e-k8s-service-upgrade-9061/service-test-2s8kq) labels changed node wide: true\nI0919 14:37:08.046474   23680 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:37:08.050402   23680 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:37:08.173175   23680 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nE0919 14:40:50.616218   23680 openshift-tuned.go:881] Pod event watch channel closed.\nI0919 14:40:50.616247   23680 openshift-tuned.go:883] Increasing resyncPeriod to 130\n
Sep 19 14:41:28.920 E ns/openshift-machine-api pod/machine-api-controllers-5657cd484d-w4nhc node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=controller-manager container exited with code 1 (Error): 
Sep 19 14:41:28.948 E ns/openshift-cluster-node-tuning-operator pod/tuned-qgtnk node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=tuned container exited with code 143 (Error): d profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 14:37:34.852166    7503 openshift-tuned.go:550] Pod (openshift-cluster-version/cluster-version-operator-6b59cfdbd4-wvc7b) labels changed node wide: true\nI0919 14:37:38.211002    7503 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:37:38.213443    7503 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:37:38.367773    7503 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 14:39:14.869944    7503 openshift-tuned.go:550] Pod (openshift-kube-controller-manager-operator/kube-controller-manager-operator-69b575cf4b-d2c8g) labels changed node wide: true\nI0919 14:39:18.211012    7503 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:39:18.213798    7503 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:39:18.326765    7503 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 14:40:38.062014    7503 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-7-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal) labels changed node wide: false\nI0919 14:40:47.979352    7503 openshift-tuned.go:550] Pod (openshift-kube-scheduler/openshift-kube-scheduler-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal) labels changed node wide: true\nI0919 14:40:48.211047    7503 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:40:48.213930    7503 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:40:48.372472    7503 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nE0919 14:40:50.638814    7503 openshift-tuned.go:881] Pod event watch channel closed.\nI0919 14:40:50.638837    7503 openshift-tuned.go:883] Increasing resyncPeriod to 134\n
Sep 19 14:42:59.443 E ns/openshift-cluster-machine-approver pod/machine-approver-755bc844b6-zsc2x node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=machine-approver-controller container exited with code 2 (Error): sr-vhwsn approved\nI0919 14:26:17.498877       1 main.go:139] CSR csr-ngw5x added\nI0919 14:26:17.513536       1 csr_check.go:415] retrieving serving cert from ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal (10.0.32.3:10250)\nW0919 14:26:17.515525       1 csr_check.go:175] Failed to retrieve current serving cert: remote error: tls: internal error\nI0919 14:26:17.515643       1 csr_check.go:180] Falling back to machine-api authorization for ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal\nI0919 14:26:17.521574       1 main.go:189] CSR csr-ngw5x approved\nI0919 14:33:46.081282       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0919 14:33:46.081762       1 reflector.go:270] github.com/openshift/cluster-machine-approver/main.go:231: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=6026&timeoutSeconds=499&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0919 14:33:47.082438       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:231: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0919 14:33:48.083945       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:231: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0919 14:40:50.621746       1 reflector.go:270] github.com/openshift/cluster-machine-approver/main.go:231: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=15178&timeoutSeconds=427&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\n
Sep 19 14:43:15.380 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-5b88fb4bc9-f8jc9 node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=cluster-node-tuning-operator container exited with code 255 (Error): t\nI0919 14:42:04.276751       1 status.go:25] syncOperatorStatus()\nI0919 14:42:04.289424       1 tuned_controller.go:187] syncServiceAccount()\nI0919 14:42:04.289586       1 tuned_controller.go:214] syncClusterRole()\nI0919 14:42:04.320624       1 tuned_controller.go:245] syncClusterRoleBinding()\nI0919 14:42:04.354875       1 tuned_controller.go:276] syncClusterConfigMap()\nI0919 14:42:04.361091       1 tuned_controller.go:276] syncClusterConfigMap()\nI0919 14:42:04.365312       1 tuned_controller.go:313] syncDaemonSet()\nI0919 14:42:04.540465       1 tuned_controller.go:432] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0919 14:42:04.540576       1 status.go:25] syncOperatorStatus()\nI0919 14:42:04.551458       1 tuned_controller.go:187] syncServiceAccount()\nI0919 14:42:04.551597       1 tuned_controller.go:214] syncClusterRole()\nI0919 14:42:04.588029       1 tuned_controller.go:245] syncClusterRoleBinding()\nI0919 14:42:04.620489       1 tuned_controller.go:276] syncClusterConfigMap()\nI0919 14:42:04.625742       1 tuned_controller.go:276] syncClusterConfigMap()\nI0919 14:42:04.630186       1 tuned_controller.go:313] syncDaemonSet()\nI0919 14:42:04.636528       1 tuned_controller.go:432] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0919 14:42:04.636550       1 status.go:25] syncOperatorStatus()\nI0919 14:42:04.645344       1 tuned_controller.go:187] syncServiceAccount()\nI0919 14:42:04.645458       1 tuned_controller.go:214] syncClusterRole()\nI0919 14:42:04.678061       1 tuned_controller.go:245] syncClusterRoleBinding()\nI0919 14:42:04.711397       1 tuned_controller.go:276] syncClusterConfigMap()\nI0919 14:42:04.716294       1 tuned_controller.go:276] syncClusterConfigMap()\nI0919 14:42:04.816483       1 tuned_controller.go:313] syncDaemonSet()\nW0919 14:42:36.092382       1 reflector.go:289] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:204: watch of *v1.ConfigMap ended with: too old resource version: 19173 (19251)\nF0919 14:43:12.473906       1 main.go:82] <nil>\n
Sep 19 14:43:18.225 E ns/openshift-cluster-samples-operator pod/cluster-samples-operator-7d448cd5b4-whc5g node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=cluster-samples-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:43:18.619 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-59ff6cffd8-f4hf7 node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:43:19.209 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-8655nk7sz node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:43:20.188 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2020/09/19 14:31:28 Watching directory: "/etc/alertmanager/config"\n
Sep 19 14:43:20.188 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/19 14:31:28 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 14:31:28 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 14:31:28 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 14:31:28 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/19 14:31:28 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 14:31:28 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 14:31:28 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 14:31:28 http.go:106: HTTPS: listening on [::]:9095\n
Sep 19 14:43:22.733 E ns/openshift-monitoring pod/openshift-state-metrics-76db7996df-mmlj2 node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=openshift-state-metrics container exited with code 2 (Error): 
Sep 19 14:43:24.582 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Could not update deployment "openshift-authentication-operator/authentication-operator" (134 of 435)\n* Could not update deployment "openshift-console/downloads" (291 of 435)\n* Could not update deployment "openshift-image-registry/cluster-image-registry-operator" (169 of 435)\n* Could not update deployment "openshift-marketplace/marketplace-operator" (345 of 435)\n* Could not update deployment "openshift-operator-lifecycle-manager/catalog-operator" (325 of 435)
Sep 19 14:43:25.233 E ns/openshift-monitoring pod/kube-state-metrics-54754b9bb7-rxlcn node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=kube-state-metrics container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:43:25.233 E ns/openshift-monitoring pod/kube-state-metrics-54754b9bb7-rxlcn node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy-self container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:43:25.233 E ns/openshift-monitoring pod/kube-state-metrics-54754b9bb7-rxlcn node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy-main container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:43:36.978 E ns/openshift-image-registry pod/node-ca-t4978 node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:43:37.528 E ns/openshift-ingress pod/router-default-76df987cb4-txlzp node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): ttp://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:42:25.526252       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:42:43.064282       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nE0919 14:42:45.962137       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""\nE0919 14:42:45.962628       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""\nE0919 14:42:45.962796       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""\nI0919 14:42:55.509832       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:43:00.506464       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:43:05.509627       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:43:10.505128       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:43:15.507648       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:43:20.507444       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:43:25.520967       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:43:30.508895       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Sep 19 14:43:38.802 E ns/openshift-monitoring pod/node-exporter-fr4q8 node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 143 (Error): 20-09-19T14:30:28Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:28Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 14:43:43.297 E ns/openshift-apiserver pod/apiserver-x6sf6 node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=openshift-apiserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:43:44.904 E ns/openshift-monitoring pod/node-exporter-llq4m node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 143 (Error): 20-09-19T14:30:29Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 14:43:47.312 E ns/openshift-monitoring pod/telemeter-client-6cc59bcdbf-v2kfg node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=telemeter-client container exited with code 2 (Error): 
Sep 19 14:43:47.312 E ns/openshift-monitoring pod/telemeter-client-6cc59bcdbf-v2kfg node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=reload container exited with code 2 (Error): 
Sep 19 14:43:58.003 E ns/openshift-console pod/downloads-7ddc778c45-twsq4 node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:43:58.347 E ns/openshift-image-registry pod/node-ca-r9xcr node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:44:04.361 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): =info ts=2020-09-19T14:44:00.136Z caller=main.go:332 fd_limits="(soft=1048576, hard=1048576)"\nlevel=info ts=2020-09-19T14:44:00.136Z caller=main.go:333 vm_limits="(soft=unlimited, hard=unlimited)"\nlevel=info ts=2020-09-19T14:44:00.138Z caller=main.go:652 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T14:44:00.138Z caller=web.go:448 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T14:44:00.144Z caller=main.go:667 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T14:44:00.144Z caller=main.go:668 msg="TSDB started"\nlevel=info ts=2020-09-19T14:44:00.144Z caller=main.go:738 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T14:44:00.144Z caller=main.go:521 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T14:44:00.144Z caller=main.go:535 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T14:44:00.144Z caller=main.go:557 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T14:44:00.144Z caller=main.go:531 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T14:44:00.144Z caller=main.go:517 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T14:44:00.144Z caller=main.go:551 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T14:44:00.144Z caller=manager.go:776 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T14:44:00.145Z caller=manager.go:782 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T14:44:00.147Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T14:44:00.147Z caller=main.go:722 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19T14:44:00.148Z caller=main.go:731 err="error loading config from \"/etc/prometheus/config_out/prometheus.env.yaml\": couldn't load configuration (--config.file=\"/etc/prometheus/config_out/prometheus.env.yaml\"): open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 19 14:44:07.965 E ns/openshift-monitoring pod/prometheus-adapter-d6b7bf6f5-rfnlw node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error): I0919 14:30:45.779129       1 adapter.go:93] successfully using in-cluster auth\nI0919 14:30:46.234728       1 secure_serving.go:116] Serving securely on [::]:6443\n
Sep 19 14:44:09.550 E ns/openshift-operator-lifecycle-manager pod/packageserver-7b76988fc6-nc59t node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:44:22.614 E ns/openshift-controller-manager pod/controller-manager-nfxc9 node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=controller-manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:44:26.145 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): =info ts=2020-09-19T14:44:20.724Z caller=main.go:332 fd_limits="(soft=1048576, hard=1048576)"\nlevel=info ts=2020-09-19T14:44:20.724Z caller=main.go:333 vm_limits="(soft=unlimited, hard=unlimited)"\nlevel=info ts=2020-09-19T14:44:20.728Z caller=main.go:652 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T14:44:20.739Z caller=main.go:667 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T14:44:20.739Z caller=main.go:668 msg="TSDB started"\nlevel=info ts=2020-09-19T14:44:20.739Z caller=main.go:738 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T14:44:20.739Z caller=main.go:521 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T14:44:20.739Z caller=main.go:535 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T14:44:20.739Z caller=main.go:557 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T14:44:20.739Z caller=main.go:531 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T14:44:20.740Z caller=main.go:517 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T14:44:20.740Z caller=manager.go:776 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T14:44:20.740Z caller=manager.go:782 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T14:44:20.740Z caller=web.go:448 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T14:44:20.740Z caller=main.go:551 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T14:44:20.747Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T14:44:20.747Z caller=main.go:722 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19T14:44:20.747Z caller=main.go:731 err="error loading config from \"/etc/prometheus/config_out/prometheus.env.yaml\": couldn't load configuration (--config.file=\"/etc/prometheus/config_out/prometheus.env.yaml\"): open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 19 14:45:00.589 E ns/openshift-marketplace pod/certified-operators-79cb96d6bc-q5wnv node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 2 (Error): 
Sep 19 14:45:05.251 E ns/openshift-marketplace pod/community-operators-674bf8f8f8-fv5rr node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=community-operators container exited with code 2 (Error): 
Sep 19 14:45:16.322 E ns/openshift-image-registry pod/node-ca-58q68 node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:45:17.527 E ns/openshift-monitoring pod/node-exporter-24mw9 node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 143 (Error): 20-09-19T14:30:37Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:37Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 14:45:19.126 E ns/openshift-authentication pod/oauth-openshift-5c64cb76d9-88q9b node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:45:20.323 E ns/openshift-service-ca pod/configmap-cabundle-injector-5587d9f5cf-rlqfm node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=configmap-cabundle-injector-controller container exited with code 255 (Error): 
Sep 19 14:45:22.921 E ns/openshift-service-ca pod/apiservice-cabundle-injector-56c7b7974f-x59zp node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Sep 19 14:45:26.649 E ns/openshift-monitoring pod/node-exporter-zd9c8 node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 143 (Error): 20-09-19T14:30:29Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 14:45:42.331 E ns/openshift-monitoring pod/node-exporter-954zn node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 143 (Error): 20-09-19T14:30:29Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T14:30:29Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 14:45:56.072 E ns/openshift-console pod/console-7f489b69fc-s5zkv node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=console container exited with code 2 (Error): 2020/09/19 14:33:16 cmd/main: cookies are secure!\n2020/09/19 14:33:16 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/19 14:33:26 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/19 14:33:36 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/19 14:33:46 cmd/main: Binding to 0.0.0.0:8443...\n2020/09/19 14:33:46 cmd/main: using TLS\n
Sep 19 14:47:21.109 E ns/openshift-network-operator pod/network-operator-6c955d77d4-sr5lh node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=network-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:47:32.134 E ns/openshift-multus pod/multus-admission-controller-85zfp node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=multus-admission-controller container exited with code 2 (Error): 
Sep 19 14:47:44.189 E ns/openshift-sdn pod/sdn-gthlk node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error): 47:28.715862    5357 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:47:28.715884    5357 proxier.go:346] userspace syncProxyRules took 75.678553ms\nI0919 14:47:28.715892    5357 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:47:31.150974    5357 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.9:6443 10.130.0.7:6443]\nI0919 14:47:31.151130    5357 roundrobin.go:240] Delete endpoint 10.128.0.15:6443 for service "openshift-multus/multus-admission-controller:"\nI0919 14:47:31.151239    5357 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 14:47:31.330719    5357 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:47:31.427960    5357 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:47:31.427989    5357 proxier.go:346] userspace syncProxyRules took 97.244833ms\nI0919 14:47:31.428001    5357 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:47:39.965150    5357 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.0.2:9101 10.0.0.3:9101 10.0.0.5:9101 10.0.32.2:9101 10.0.32.3:9101]\nI0919 14:47:39.965189    5357 roundrobin.go:240] Delete endpoint 10.0.32.4:9101 for service "openshift-sdn/sdn:metrics"\nI0919 14:47:39.965239    5357 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 14:47:40.118520    5357 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:47:40.202465    5357 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:47:40.202491    5357 proxier.go:346] userspace syncProxyRules took 83.948834ms\nI0919 14:47:40.202501    5357 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:47:43.291215    5357 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0919 14:47:43.291257    5357 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 19 14:47:45.452 E ns/openshift-sdn pod/sdn-controller-bmjlt node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=sdn-controller container exited with code 2 (Error): I0919 14:21:50.889235       1 leaderelection.go:217] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Sep 19 14:47:47.447 E ns/openshift-multus pod/multus-admission-controller-lgngp node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=multus-admission-controller container exited with code 2 (Error): 
Sep 19 14:47:58.265 E ns/openshift-sdn pod/sdn-controller-r2znv node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=sdn-controller container exited with code 2 (Error): ig-storage-sig-api-machinery-secret-upgrade-7494"\nE0919 14:40:50.583477       1 reflector.go:270] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.NetNamespace: Get https://api-int.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:6443/apis/network.openshift.io/v1/netnamespaces?resourceVersion=16284&timeout=5m32s&timeoutSeconds=332&watch=true: dial tcp 35.237.0.38:6443: connect: connection refused\nE0919 14:40:50.590739       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Node: Get https://api-int.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:6443/api/v1/nodes?resourceVersion=18308&timeout=7m40s&timeoutSeconds=460&watch=true: dial tcp 35.237.0.38:6443: connect: connection refused\nE0919 14:40:50.590849       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Namespace: Get https://api-int.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces?resourceVersion=16436&timeout=6m38s&timeoutSeconds=398&watch=true: dial tcp 35.237.0.38:6443: connect: connection refused\nE0919 14:40:50.591161       1 reflector.go:270] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.HostSubnet: Get https://api-int.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:6443/apis/network.openshift.io/v1/hostsubnets?resourceVersion=15236&timeout=8m11s&timeoutSeconds=491&watch=true: dial tcp 35.237.0.38:6443: connect: connection refused\nE0919 14:40:56.407313       1 reflector.go:126] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to list *v1.NetNamespace: the server could not find the requested resource (get netnamespaces.network.openshift.io)\nE0919 14:40:56.582152       1 reflector.go:126] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to list *v1.HostSubnet: the server could not find the requested resource (get hostsubnets.network.openshift.io)\n
Sep 19 14:47:59.646 E ns/openshift-multus pod/multus-nm94k node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 14:48:12.895 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:48:12.961 E ns/openshift-sdn pod/sdn-lj84l node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error): yRules start\nI0919 14:48:04.972371    5081 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:48:05.047849    5081 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:48:05.047875    5081 proxier.go:346] userspace syncProxyRules took 75.480441ms\nI0919 14:48:05.047886    5081 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:48:06.711006    5081 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.0.2:9101 10.0.0.3:9101 10.0.0.5:9101 10.0.32.2:9101 10.0.32.3:9101 10.0.32.4:9101]\nI0919 14:48:06.711157    5081 roundrobin.go:240] Delete endpoint 10.0.0.5:9101 for service "openshift-sdn/sdn:metrics"\nI0919 14:48:06.711382    5081 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 14:48:06.731103    5081 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.0.2:9101 10.0.0.3:9101 10.0.0.5:9101 10.0.32.2:9101 10.0.32.4:9101]\nI0919 14:48:06.731217    5081 roundrobin.go:240] Delete endpoint 10.0.32.3:9101 for service "openshift-sdn/sdn:metrics"\nI0919 14:48:06.900406    5081 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:48:07.014479    5081 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:48:07.014514    5081 proxier.go:346] userspace syncProxyRules took 114.081633ms\nI0919 14:48:07.014526    5081 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:48:07.014555    5081 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 14:48:07.199878    5081 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:48:07.270668    5081 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:48:07.270719    5081 proxier.go:346] userspace syncProxyRules took 70.816935ms\nI0919 14:48:07.270731    5081 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nF0919 14:48:12.460869    5081 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 19 14:48:28.787 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 19 14:48:36.104 E ns/openshift-multus pod/multus-sfdqn node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 14:48:44.643 E ns/openshift-sdn pod/sdn-9lm5q node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error): start\nI0919 14:48:34.117924    4939 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.0.2:9101 10.0.0.3:9101 10.0.0.5:9101 10.0.32.3:9101 10.0.32.4:9101]\nI0919 14:48:34.117963    4939 roundrobin.go:240] Delete endpoint 10.0.32.2:9101 for service "openshift-sdn/sdn:metrics"\nI0919 14:48:34.306371    4939 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:48:34.406856    4939 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:48:34.406887    4939 proxier.go:346] userspace syncProxyRules took 100.48372ms\nI0919 14:48:34.406898    4939 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:48:34.406908    4939 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 14:48:34.580994    4939 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:48:34.702423    4939 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:48:34.702458    4939 proxier.go:346] userspace syncProxyRules took 121.436674ms\nI0919 14:48:34.702472    4939 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:48:43.139954    4939 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.68:6443 10.129.0.59:6443 10.130.0.54:6443]\nI0919 14:48:43.140131    4939 roundrobin.go:240] Delete endpoint 10.129.0.59:6443 for service "openshift-multus/multus-admission-controller:"\nI0919 14:48:43.140213    4939 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 14:48:43.306178    4939 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:48:43.384642    4939 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:48:43.384669    4939 proxier.go:346] userspace syncProxyRules took 78.464154ms\nI0919 14:48:43.384679    4939 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nF0919 14:48:43.921494    4939 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 19 14:48:47.158 E ns/openshift-service-ca pod/apiservice-cabundle-injector-5596b6b6fc-7sq8z node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Sep 19 14:48:52.491 E ns/openshift-controller-manager pod/controller-manager-njp9s node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=controller-manager container exited with code 255 (Error): 
Sep 19 14:48:56.525 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 19 14:49:11.945 E ns/openshift-sdn pod/sdn-z6b8m node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error): er/controller-manager:https to [10.128.0.66:8443 10.129.0.57:8443 10.130.0.50:8443]\nI0919 14:49:01.529368   64744 roundrobin.go:240] Delete endpoint 10.130.0.50:8443 for service "openshift-controller-manager/controller-manager:https"\nI0919 14:49:01.638338   64744 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:49:01.713826   64744 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:49:01.713853   64744 proxier.go:346] userspace syncProxyRules took 75.493388ms\nI0919 14:49:01.713865   64744 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:49:01.713876   64744 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 14:49:01.892934   64744 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:49:01.979729   64744 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:49:01.979759   64744 proxier.go:346] userspace syncProxyRules took 86.791948ms\nI0919 14:49:01.979768   64744 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:49:10.008467   64744 roundrobin.go:310] LoadBalancerRR: Setting endpoints for e2e-k8s-service-upgrade-9061/service-test: to [10.131.0.15:80]\nI0919 14:49:10.008518   64744 roundrobin.go:240] Delete endpoint 10.129.2.16:80 for service "e2e-k8s-service-upgrade-9061/service-test:"\nI0919 14:49:10.008561   64744 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 14:49:10.170154   64744 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:49:10.246389   64744 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:49:10.246417   64744 proxier.go:346] userspace syncProxyRules took 76.231917ms\nI0919 14:49:10.246427   64744 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:49:11.428298   64744 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0919 14:49:11.428343   64744 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 19 14:49:33.765 E ns/openshift-sdn pod/sdn-f57b2 node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error): 4:49:21.868035   59613 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:49:21.868065   59613 proxier.go:346] userspace syncProxyRules took 85.288112ms\nI0919 14:49:21.868075   59613 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:49:22.347421   59613 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-dns/dns-default:dns to [10.128.0.69:5353 10.128.2.24:5353 10.129.0.60:5353 10.129.2.26:5353 10.130.0.53:5353 10.131.0.22:5353]\nI0919 14:49:22.347460   59613 roundrobin.go:240] Delete endpoint 10.128.0.69:5353 for service "openshift-dns/dns-default:dns"\nI0919 14:49:22.347478   59613 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-dns/dns-default:dns-tcp to [10.128.0.69:5353 10.128.2.24:5353 10.129.0.60:5353 10.129.2.26:5353 10.130.0.53:5353 10.131.0.22:5353]\nI0919 14:49:22.347488   59613 roundrobin.go:240] Delete endpoint 10.128.0.69:5353 for service "openshift-dns/dns-default:dns-tcp"\nI0919 14:49:22.347498   59613 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-dns/dns-default:metrics to [10.128.0.69:9153 10.128.2.24:9153 10.129.0.60:9153 10.129.2.26:9153 10.130.0.53:9153 10.131.0.22:9153]\nI0919 14:49:22.347507   59613 roundrobin.go:240] Delete endpoint 10.128.0.69:9153 for service "openshift-dns/dns-default:metrics"\nI0919 14:49:22.347598   59613 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 14:49:22.541777   59613 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:49:22.631248   59613 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:49:22.631277   59613 proxier.go:346] userspace syncProxyRules took 89.470718ms\nI0919 14:49:22.631287   59613 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:49:32.802954   59613 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0919 14:49:32.803043   59613 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 19 14:49:52.334 E ns/openshift-sdn pod/sdn-q9gkf node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error):    24299 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:49:22.601196   24299 proxier.go:346] userspace syncProxyRules took 81.69521ms\nI0919 14:49:22.601213   24299 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:49:33.775264   24299 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.0.2:9101 10.0.0.3:9101 10.0.0.5:9101 10.0.32.2:9101 10.0.32.3:9101]\nI0919 14:49:33.775305   24299 roundrobin.go:240] Delete endpoint 10.0.32.4:9101 for service "openshift-sdn/sdn:metrics"\nI0919 14:49:33.775363   24299 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 14:49:33.958612   24299 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:49:34.040278   24299 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:49:34.040305   24299 proxier.go:346] userspace syncProxyRules took 81.665936ms\nI0919 14:49:34.040315   24299 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:49:38.706741   24299 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.0.2:9101 10.0.0.3:9101 10.0.0.5:9101 10.0.32.2:9101 10.0.32.3:9101 10.0.32.4:9101]\nI0919 14:49:38.706787   24299 roundrobin.go:240] Delete endpoint 10.0.32.4:9101 for service "openshift-sdn/sdn:metrics"\nI0919 14:49:38.706846   24299 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 14:49:39.087061   24299 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:49:39.184327   24299 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:49:39.184351   24299 proxier.go:346] userspace syncProxyRules took 97.264575ms\nI0919 14:49:39.184364   24299 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:49:51.297107   24299 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0919 14:49:51.297153   24299 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 19 14:50:09.435 E ns/openshift-multus pod/multus-wptpl node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 137 (Error): 
Sep 19 14:54:52.663 E ns/openshift-machine-config-operator pod/machine-config-controller-7c558d48c-lsj97 node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=machine-config-controller container exited with code 2 (Error):  14:48:56.330104       1 node_controller.go:433] Pool master: node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal is now reporting unready: node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal is reporting NotReady\nI0919 14:48:56.333561       1 node_controller.go:435] Pool worker: node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal is now reporting ready\nI0919 14:49:07.813743       1 node_controller.go:435] Pool master: node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal is now reporting ready\nI0919 14:49:43.856360       1 node_controller.go:433] Pool master: node ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal is now reporting unready: node ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal is reporting NotReady\nI0919 14:49:56.129030       1 node_controller.go:433] Pool worker: node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal is now reporting unready: node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal is reporting NotReady\nI0919 14:50:06.148491       1 node_controller.go:435] Pool worker: node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal is now reporting ready\nI0919 14:50:16.966494       1 node_controller.go:433] Pool master: node ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal is now reporting unready: node ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal is reporting NotReady\nI0919 14:50:23.900389       1 node_controller.go:435] Pool master: node ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal is now reporting ready\nI0919 14:51:07.003194       1 node_controller.go:435] Pool master: node ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal is now reporting ready\nI0919 14:51:11.940904       1 node_controller.go:433] Pool worker: node ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal is now reporting unready: node ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal is reporting NotReady\nI0919 14:51:51.989393       1 node_controller.go:435] Pool worker: node ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal is now reporting ready\n
Sep 19 14:56:56.920 E ns/openshift-machine-config-operator pod/machine-config-server-qjk58 node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=machine-config-server container exited with code 2 (Error): I0919 14:23:38.435851       1 start.go:38] Version: machine-config-daemon-4.2.33-202006040618-2-g204f5642-dirty (204f5642bd3adbfa5c85e74958223fb0cf8ad2db)\nI0919 14:23:38.437352       1 api.go:51] Launching server on :22624\nI0919 14:23:38.437432       1 api.go:51] Launching server on :22623\nI0919 14:24:39.050043       1 api.go:97] Pool worker requested by 35.237.121.228:32896\nI0919 14:24:39.650855       1 api.go:97] Pool worker requested by 35.237.121.228:1024\nI0919 14:24:40.568375       1 api.go:97] Pool worker requested by 35.237.121.228:32768\n
Sep 19 14:56:59.514 E ns/openshift-ingress pod/router-default-6bdff56b44-72wm6 node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): pt(s).\nI0919 14:48:44.166962       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:48:49.168605       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:48:54.172750       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:48:59.153355       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:49:04.160491       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:49:10.052299       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:49:15.062552       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:49:20.051093       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:49:25.050631       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:49:33.817782       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:49:38.941526       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:49:54.909802       1 logs.go:49] http: TLS handshake error from 10.128.2.1:40684: EOF\nI0919 14:49:54.955626       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:49:59.953431       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:56:57.801792       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Sep 19 14:56:59.540 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2020/09/19 14:43:44 Watching directory: "/etc/alertmanager/config"\n
Sep 19 14:56:59.540 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/19 14:43:46 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 14:43:46 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 14:43:46 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 14:43:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/19 14:43:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 14:43:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 14:43:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 14:43:46 http.go:106: HTTPS: listening on [::]:9095\n
Sep 19 14:57:06.411 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-85cc4dd88f-cxhsl node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=openshift-apiserver-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:57:07.035 E ns/openshift-machine-config-operator pod/machine-config-server-sr4cz node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=machine-config-server container exited with code 2 (Error): I0919 14:23:40.877665       1 start.go:38] Version: machine-config-daemon-4.2.33-202006040618-2-g204f5642-dirty (204f5642bd3adbfa5c85e74958223fb0cf8ad2db)\nI0919 14:23:40.879974       1 api.go:51] Launching server on :22624\nI0919 14:23:40.880100       1 api.go:51] Launching server on :22623\n
Sep 19 14:57:08.011 E ns/openshift-machine-config-operator pod/machine-config-controller-646d8dbf67-dck5z node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=machine-config-controller container exited with code 2 (Error): tion.openshift.io/v1  } {MachineConfig  99-master-ssh  machineconfiguration.openshift.io/v1  }]\nI0919 14:56:52.212543       1 render_controller.go:516] Pool worker: now targeting: rendered-worker-acfc632a5976a508e7fadb80fb0dc731\nI0919 14:56:52.214049       1 render_controller.go:516] Pool master: now targeting: rendered-master-fd16253b33c344961adee0ada149c259\nI0919 14:56:57.212403       1 node_controller.go:758] Setting node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal to desired config rendered-worker-acfc632a5976a508e7fadb80fb0dc731\nI0919 14:56:57.214598       1 node_controller.go:758] Setting node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal to desired config rendered-master-fd16253b33c344961adee0ada149c259\nI0919 14:56:57.235491       1 node_controller.go:452] Pool worker: node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-acfc632a5976a508e7fadb80fb0dc731\nI0919 14:56:57.237511       1 node_controller.go:452] Pool master: node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-fd16253b33c344961adee0ada149c259\nI0919 14:56:57.572430       1 node_controller.go:452] Pool worker: node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal changed machineconfiguration.openshift.io/state = Working\nI0919 14:56:57.586412       1 node_controller.go:433] Pool worker: node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal is now reporting unready: node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal is reporting Unschedulable\nI0919 14:56:58.263350       1 node_controller.go:452] Pool master: node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal changed machineconfiguration.openshift.io/state = Working\nI0919 14:56:58.291758       1 node_controller.go:433] Pool master: node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal is now reporting unready: node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal is reporting Unschedulable\n
Sep 19 14:57:23.005 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): =info ts=2020-09-19T14:57:17.363Z caller=main.go:332 fd_limits="(soft=1048576, hard=1048576)"\nlevel=info ts=2020-09-19T14:57:17.363Z caller=main.go:333 vm_limits="(soft=unlimited, hard=unlimited)"\nlevel=info ts=2020-09-19T14:57:17.365Z caller=main.go:652 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T14:57:17.365Z caller=web.go:448 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T14:57:17.371Z caller=main.go:667 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T14:57:17.371Z caller=main.go:668 msg="TSDB started"\nlevel=info ts=2020-09-19T14:57:17.371Z caller=main.go:738 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T14:57:17.371Z caller=main.go:521 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T14:57:17.372Z caller=main.go:535 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T14:57:17.372Z caller=main.go:557 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T14:57:17.372Z caller=manager.go:776 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T14:57:17.372Z caller=manager.go:782 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T14:57:17.372Z caller=main.go:517 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T14:57:17.372Z caller=main.go:551 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T14:57:17.372Z caller=main.go:531 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T14:57:17.374Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T14:57:17.374Z caller=main.go:722 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19T14:57:17.374Z caller=main.go:731 err="error loading config from \"/etc/prometheus/config_out/prometheus.env.yaml\": couldn't load configuration (--config.file=\"/etc/prometheus/config_out/prometheus.env.yaml\"): open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 19 14:59:16.201 E ns/openshift-apiserver pod/apiserver-txb6b node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=openshift-apiserver container exited with code 255 (Error): 8s_internal_local_delegation_chain_0000000007\nI0919 14:57:47.097521       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000008\nI0919 14:57:47.097574       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000011\nI0919 14:57:48.097128       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001\nI0919 14:57:48.097423       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002\nI0919 14:57:48.097488       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000005\nI0919 14:57:48.097559       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000009\nI0919 14:57:48.097616       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000010\nI0919 14:57:48.097663       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000012\nI0919 14:57:48.097709       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000003\nI0919 14:57:48.097754       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000004\nI0919 14:57:48.097812       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000006\nI0919 14:57:48.097855       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000007\nI0919 14:57:48.097897       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000008\nI0919 14:57:48.097947       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000011\n
Sep 19 14:59:16.600 E ns/openshift-cluster-node-tuning-operator pod/tuned-mw55w node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=tuned container exited with code 255 (Error): ller-5-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal) labels changed node wide: false\nI0919 14:57:01.392932   62621 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-3-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal) labels changed node wide: false\nI0919 14:57:01.592743   62621 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-2-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal) labels changed node wide: false\nI0919 14:57:01.793450   62621 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-6-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal) labels changed node wide: true\nI0919 14:57:05.185021   62621 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:57:05.187993   62621 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:57:05.321490   62621 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 14:57:05.621320   62621 openshift-tuned.go:550] Pod (openshift-machine-api/machine-api-operator-547565fd54-chcrf) labels changed node wide: true\nI0919 14:57:10.185042   62621 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:57:10.188347   62621 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:57:10.328127   62621 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 14:57:11.219319   62621 openshift-tuned.go:550] Pod (openshift-machine-api/cluster-autoscaler-operator-b66d87775-mhtn6) labels changed node wide: true\nI0919 14:57:15.185002   62621 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:57:15.189833   62621 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:57:15.366808   62621 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
Sep 19 14:59:18.464 E ns/openshift-cluster-node-tuning-operator pod/tuned-bs6h9 node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=tuned container exited with code 143 (Error): I0919 14:48:22.109659   47383 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:48:22.234170   47383 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 14:48:28.714636   47383 openshift-tuned.go:550] Pod (openshift-dns/dns-default-lpjml) labels changed node wide: true\nI0919 14:48:32.106843   47383 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:48:32.109918   47383 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:48:32.230998   47383 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 14:49:10.511716   47383 openshift-tuned.go:550] Pod (openshift-sdn/ovs-xz99x) labels changed node wide: true\nI0919 14:49:12.106769   47383 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:49:12.120282   47383 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:49:12.263298   47383 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 14:49:23.097681   47383 openshift-tuned.go:852] Lowering resyncPeriod to 68\nI0919 14:54:00.519784   47383 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-daemon-pqdms) labels changed node wide: true\nI0919 14:54:02.106802   47383 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:54:02.109970   47383 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:54:02.232013   47383 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 14:57:48.778776   47383 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0919 14:57:48.783262   47383 openshift-tuned.go:881] Pod event watch channel closed.\nI0919 14:57:48.783293   47383 openshift-tuned.go:883] Increasing resyncPeriod to 136\n
Sep 19 14:59:19.018 E ns/openshift-image-registry pod/node-ca-bpw6g node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=node-ca container exited with code 255 (Error): 
Sep 19 14:59:20.200 E ns/openshift-multus pod/multus-2pptr node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 255 (Error): 
Sep 19 14:59:21.799 E ns/openshift-controller-manager pod/controller-manager-njp9s node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=controller-manager container exited with code 255 (Error): 
Sep 19 14:59:22.999 E ns/openshift-multus pod/multus-admission-controller-skcz2 node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=multus-admission-controller container exited with code 255 (Error): 
Sep 19 14:59:23.400 E ns/openshift-dns pod/dns-default-g7ckk node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=dns container exited with code 255 (Error): .:5353\n2020-09-19T14:47:43.692Z [INFO] plugin/reload: Running configuration MD5 = acefebac40c697acb35b9e96ca3c7ec9\n2020-09-19T14:47:43.693Z [INFO] CoreDNS-1.5.2\n2020-09-19T14:47:43.693Z [INFO] linux/amd64, go1.12.9, \nCoreDNS-1.5.2\nlinux/amd64, go1.12.9, \n[INFO] SIGTERM: Shutting down servers then terminating\n
Sep 19 14:59:23.400 E ns/openshift-dns pod/dns-default-g7ckk node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (86) - No such process\n
Sep 19 14:59:24.199 E ns/openshift-machine-config-operator pod/machine-config-server-f8nz5 node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=machine-config-server container exited with code 255 (Error): I0919 14:56:55.682100       1 start.go:38] Version: machine-config-daemon-4.2.33-202006040618-2-g204f5642-dirty (204f5642bd3adbfa5c85e74958223fb0cf8ad2db)\nI0919 14:56:55.683456       1 api.go:51] Launching server on :22624\nI0919 14:56:55.683563       1 api.go:51] Launching server on :22623\n
Sep 19 14:59:25.003 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:13.319536       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:13.319909       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:18.326823       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:18.327238       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:23.335163       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:23.342082       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:28.351130       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:28.351491       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:33.359539       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:33.360201       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:38.369120       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:38.369477       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:43.379395       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:43.379748       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:48.388227       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:48.388569       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Sep 19 14:59:25.003 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=kube-controller-manager-6 container exited with code 255 (Error): e to monitor quota for resource "healthchecking.openshift.io/v1alpha1, Resource=machinehealthchecks", couldn't start monitor for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers": unable to monitor quota for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers", couldn't start monitor for resource "operators.coreos.com/v1, Resource=operatorsources": unable to monitor quota for resource "operators.coreos.com/v1, Resource=operatorsources", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=catalogsources": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=catalogsources", couldn't start monitor for resource "tuned.openshift.io/v1, Resource=tuneds": unable to monitor quota for resource "tuned.openshift.io/v1, Resource=tuneds", couldn't start monitor for resource "operator.openshift.io/v1, Resource=ingresscontrollers": unable to monitor quota for resource "operator.openshift.io/v1, Resource=ingresscontrollers", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=alertmanagers": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=alertmanagers", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=clusterserviceversions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=clusterserviceversions", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=installplans": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=installplans", couldn't start monitor for resource "operators.coreos.com/v2, Resource=catalogsourceconfigs": unable to monitor quota for resource "operators.coreos.com/v2, Resource=catalogsourceconfigs", couldn't start monitor for resource "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions": unable to monitor quota for resource "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions"]\nE0919 14:57:48.587341       1 controllermanager.go:287] leaderelection lost\n
Sep 19 14:59:25.401 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=scheduler container exited with code 255 (Error): ge.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope\nE0919 14:44:57.893667       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope\nE0919 14:44:57.896421       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope\nE0919 14:44:57.896512       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope\nE0919 14:44:57.983411       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope\nE0919 14:44:57.984347       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope\nW0919 14:57:14.071606       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.PersistentVolume ended with: too old resource version: 22783 (29480)\nW0919 14:57:14.153042       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.Service ended with: too old resource version: 22785 (29481)\nW0919 14:57:14.381887       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.ReplicationController ended with: too old resource version: 25800 (29481)\nF0919 14:57:48.586731       1 server.go:247] leaderelection lost\n
Sep 19 14:59:25.802 E ns/openshift-etcd pod/etcd-member-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=etcd-member container exited with code 255 (Error):  rafthttp: stopped streaming with peer 5c712ff1661feac3 (writer)\n2020-09-19 14:57:48.620123 I | rafthttp: closed the TCP streaming connection with peer 5c712ff1661feac3 (stream Message writer)\n2020-09-19 14:57:48.620211 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (writer)\n2020-09-19 14:57:48.620386 I | rafthttp: stopped HTTP pipelining with peer 5c712ff1661feac3\n2020-09-19 14:57:48.620527 W | rafthttp: lost the TCP streaming connection with peer 5c712ff1661feac3 (stream MsgApp v2 reader)\n2020-09-19 14:57:48.620582 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (stream MsgApp v2 reader)\n2020-09-19 14:57:48.620679 W | rafthttp: lost the TCP streaming connection with peer 5c712ff1661feac3 (stream Message reader)\n2020-09-19 14:57:48.620731 E | rafthttp: failed to read 5c712ff1661feac3 on stream Message (context canceled)\n2020-09-19 14:57:48.620763 I | rafthttp: peer 5c712ff1661feac3 became inactive (message send to peer failed)\n2020-09-19 14:57:48.620791 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (stream Message reader)\n2020-09-19 14:57:48.620842 I | rafthttp: stopped peer 5c712ff1661feac3\n2020-09-19 14:57:48.633939 I | embed: rejected connection from "10.0.0.5:57238" (error "read tcp 10.0.0.3:2380->10.0.0.5:57238: use of closed network connection", ServerName "ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com")\n2020-09-19 14:57:48.634094 I | embed: rejected connection from "10.0.0.5:57236" (error "read tcp 10.0.0.3:2380->10.0.0.5:57236: use of closed network connection", ServerName "ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com")\n2020-09-19 14:57:48.637941 I | embed: rejected connection from "10.0.0.2:57006" (error "set tcp 10.0.0.3:2380: use of closed network connection", ServerName "ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com")\n2020-09-19 14:57:48.655141 I | embed: rejected connection from "10.0.0.2:57008" (error "set tcp 10.0.0.3:2380: use of closed network connection", ServerName "ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com")\n
Sep 19 14:59:25.802 E ns/openshift-etcd pod/etcd-member-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=etcd-metrics container exited with code 255 (Error): 2020-09-19 14:57:19.313514 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 14:57:19.314497 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-09-19 14:57:19.314993 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 14:57:19.333324 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Sep 19 14:59:26.198 E ns/kube-system pod/gcp-routes-controller-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=gcp-routes-controller container exited with code 255 (Error): I0919 14:57:14.829786   98000 run.go:51] Version: machine-config-daemon-4.2.33-202006040618-2-g204f5642-dirty (204f5642bd3adbfa5c85e74958223fb0cf8ad2db)\nI0919 14:57:14.830095   98000 run.go:54] Calling chroot("/rootfs")\n2020/09/19 14:57:14 [DEBUG] Starting checker name=dependency-check\nI0919 14:57:19.839620   98000 run.go:164] Running OnSuccess trigger\n
Sep 19 14:59:30.803 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error):        1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 14:54:58.091648       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:54:58.092048       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 14:54:58.299041       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:54:58.299326       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Sep 19 14:59:30.803 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=kube-apiserver-insecure-readyz-7 container exited with code 255 (Error): I0919 14:44:53.452777       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Sep 19 14:59:30.803 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=kube-apiserver-7 container exited with code 255 (Error): i-int-gce.dev.openshift.com:2379 <nil>}]\nW0919 14:57:30.116040       1 asm_amd64.s:1337] Failed to dial etcd-2.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379: grpc: the connection is closing; please retry.\nW0919 14:57:30.116071       1 asm_amd64.s:1337] Failed to dial etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379: grpc: the connection is closing; please retry.\nW0919 14:57:30.125980       1 reflector.go:293] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: watch of *v1.ClusterResourceQuota ended with: too old resource version: 26421 (29663)\nI0919 14:57:31.396462       1 controller.go:606] quota admission added evaluator for: prometheuses.monitoring.coreos.com\nI0919 14:57:31.396654       1 controller.go:606] quota admission added evaluator for: prometheuses.monitoring.coreos.com\nI0919 14:57:31.631880       1 aggregator.go:222] Updating OpenAPI spec because k8s_internal_local_delegation_chain_0000000002 is updated\nI0919 14:57:33.568422       1 aggregator.go:225] Finished OpenAPI spec generation after 1.936498224s\nI0919 14:57:33.568677       1 controller.go:107] OpenAPI AggregationController: Processing item v1.build.openshift.io\nI0919 14:57:33.804982       1 controller.go:107] OpenAPI AggregationController: Processing item v1.project.openshift.io\nI0919 14:57:35.858968       1 aggregator.go:222] Updating OpenAPI spec because k8s_internal_local_delegation_chain_0000000002 is updated\nI0919 14:57:37.729329       1 aggregator.go:225] Finished OpenAPI spec generation after 1.870201916s\nI0919 14:57:48.587414       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nI0919 14:57:48.587763       1 genericapiserver.go:546] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\n
Sep 19 14:59:31.603 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:13.319536       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:13.319909       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:18.326823       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:18.327238       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:23.335163       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:23.342082       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:28.351130       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:28.351491       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:33.359539       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:33.360201       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:38.369120       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:38.369477       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:43.379395       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:43.379748       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:48.388227       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:48.388569       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Sep 19 14:59:31.603 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=kube-controller-manager-6 container exited with code 255 (Error): e to monitor quota for resource "healthchecking.openshift.io/v1alpha1, Resource=machinehealthchecks", couldn't start monitor for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers": unable to monitor quota for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers", couldn't start monitor for resource "operators.coreos.com/v1, Resource=operatorsources": unable to monitor quota for resource "operators.coreos.com/v1, Resource=operatorsources", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=catalogsources": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=catalogsources", couldn't start monitor for resource "tuned.openshift.io/v1, Resource=tuneds": unable to monitor quota for resource "tuned.openshift.io/v1, Resource=tuneds", couldn't start monitor for resource "operator.openshift.io/v1, Resource=ingresscontrollers": unable to monitor quota for resource "operator.openshift.io/v1, Resource=ingresscontrollers", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=alertmanagers": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=alertmanagers", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=clusterserviceversions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=clusterserviceversions", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=installplans": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=installplans", couldn't start monitor for resource "operators.coreos.com/v2, Resource=catalogsourceconfigs": unable to monitor quota for resource "operators.coreos.com/v2, Resource=catalogsourceconfigs", couldn't start monitor for resource "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions": unable to monitor quota for resource "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions"]\nE0919 14:57:48.587341       1 controllermanager.go:287] leaderelection lost\n
Sep 19 14:59:32.003 E ns/openshift-etcd pod/etcd-member-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=etcd-member container exited with code 255 (Error):  rafthttp: stopped streaming with peer 5c712ff1661feac3 (writer)\n2020-09-19 14:57:48.620123 I | rafthttp: closed the TCP streaming connection with peer 5c712ff1661feac3 (stream Message writer)\n2020-09-19 14:57:48.620211 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (writer)\n2020-09-19 14:57:48.620386 I | rafthttp: stopped HTTP pipelining with peer 5c712ff1661feac3\n2020-09-19 14:57:48.620527 W | rafthttp: lost the TCP streaming connection with peer 5c712ff1661feac3 (stream MsgApp v2 reader)\n2020-09-19 14:57:48.620582 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (stream MsgApp v2 reader)\n2020-09-19 14:57:48.620679 W | rafthttp: lost the TCP streaming connection with peer 5c712ff1661feac3 (stream Message reader)\n2020-09-19 14:57:48.620731 E | rafthttp: failed to read 5c712ff1661feac3 on stream Message (context canceled)\n2020-09-19 14:57:48.620763 I | rafthttp: peer 5c712ff1661feac3 became inactive (message send to peer failed)\n2020-09-19 14:57:48.620791 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (stream Message reader)\n2020-09-19 14:57:48.620842 I | rafthttp: stopped peer 5c712ff1661feac3\n2020-09-19 14:57:48.633939 I | embed: rejected connection from "10.0.0.5:57238" (error "read tcp 10.0.0.3:2380->10.0.0.5:57238: use of closed network connection", ServerName "ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com")\n2020-09-19 14:57:48.634094 I | embed: rejected connection from "10.0.0.5:57236" (error "read tcp 10.0.0.3:2380->10.0.0.5:57236: use of closed network connection", ServerName "ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com")\n2020-09-19 14:57:48.637941 I | embed: rejected connection from "10.0.0.2:57006" (error "set tcp 10.0.0.3:2380: use of closed network connection", ServerName "ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com")\n2020-09-19 14:57:48.655141 I | embed: rejected connection from "10.0.0.2:57008" (error "set tcp 10.0.0.3:2380: use of closed network connection", ServerName "ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com")\n
Sep 19 14:59:32.003 E ns/openshift-etcd pod/etcd-member-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=etcd-metrics container exited with code 255 (Error): 2020-09-19 14:57:19.313514 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 14:57:19.314497 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-09-19 14:57:19.314993 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 14:57:19.333324 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Sep 19 14:59:33.603 E ns/openshift-cluster-node-tuning-operator pod/tuned-l9ksx node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=tuned container exited with code 255 (Error): (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 14:54:37.969766    7309 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-daemon-bsm8g) labels changed node wide: true\nI0919 14:54:40.348052    7309 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:54:40.351070    7309 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:54:40.482553    7309 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 14:57:00.294017    7309 openshift-tuned.go:550] Pod (openshift-marketplace/community-operators-74f789bf64-zzn8r) labels changed node wide: true\nI0919 14:57:00.348051    7309 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:57:00.352751    7309 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:57:00.480943    7309 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 14:57:00.892711    7309 openshift-tuned.go:550] Pod (openshift-monitoring/telemeter-client-867bbb54f7-5sxw8) labels changed node wide: true\nI0919 14:57:05.347960    7309 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:57:05.351191    7309 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:57:05.478949    7309 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 14:57:37.966791    7309 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-4403/foo-5dl7z) labels changed node wide: true\nI0919 14:57:40.347918    7309 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 14:57:40.351198    7309 openshift-tuned.go:441] Getting recommended profile...\nI0919 14:57:40.527223    7309 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Sep 19 14:59:34.159 E ns/openshift-image-registry pod/node-ca-4zlll node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=node-ca container exited with code 255 (Error): 
Sep 19 14:59:34.538 E ns/openshift-monitoring pod/node-exporter-8t6nl node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 255 (Error): 20-09-19T14:45:40Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T14:45:40Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 14:59:34.538 E ns/openshift-monitoring pod/node-exporter-8t6nl node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy container exited with code 255 (Error): I0919 14:45:41.081298   16947 main.go:241] Reading certificate files\nI0919 14:45:41.081695   16947 main.go:274] Starting TCP socket on 10.0.32.2:9100\nI0919 14:45:41.081971   16947 main.go:281] Listening securely on 10.0.32.2:9100\nI0919 14:58:11.944551   16947 main.go:336] received interrupt, shutting down\nE0919 14:58:11.944830   16947 main.go:289] failed to gracefully close secure listener: close tcp 10.0.32.2:9100: use of closed network connection\n
Sep 19 14:59:34.907 E ns/openshift-dns pod/dns-default-4j5w7 node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=dns container exited with code 255 (Error): .:5353\n2020-09-19T14:49:02.719Z [INFO] plugin/reload: Running configuration MD5 = acefebac40c697acb35b9e96ca3c7ec9\n2020-09-19T14:49:02.719Z [INFO] CoreDNS-1.5.2\n2020-09-19T14:49:02.719Z [INFO] linux/amd64, go1.12.9, \nCoreDNS-1.5.2\nlinux/amd64, go1.12.9, \n[INFO] SIGTERM: Shutting down servers then terminating\n
Sep 19 14:59:34.907 E ns/openshift-dns pod/dns-default-4j5w7 node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (77) - No such process\n
Sep 19 14:59:35.291 E ns/openshift-sdn pod/sdn-q9gkf node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error): 0] Delete endpoint 10.131.0.29:9092 for service "openshift-monitoring/prometheus-k8s:tenancy"\nI0919 14:57:24.017618   28193 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-monitoring/prometheus-operated:web to [10.129.2.24:9091 10.131.0.29:9091]\nI0919 14:57:24.017640   28193 roundrobin.go:240] Delete endpoint 10.131.0.29:9091 for service "openshift-monitoring/prometheus-operated:web"\nI0919 14:57:24.017649   28193 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 14:57:24.202665   28193 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:57:24.279720   28193 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:57:24.279758   28193 proxier.go:346] userspace syncProxyRules took 77.060369ms\nI0919 14:57:24.279774   28193 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:57:24.279792   28193 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 14:57:24.461098   28193 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:57:24.554449   28193 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:57:24.554492   28193 proxier.go:346] userspace syncProxyRules took 93.371636ms\nI0919 14:57:24.554504   28193 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 14:57:48.663382   28193 roundrobin.go:310] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.0.2:6443 10.0.0.5:6443]\nI0919 14:57:48.663435   28193 roundrobin.go:240] Delete endpoint 10.0.0.3:6443 for service "default/kubernetes:https"\nI0919 14:57:48.663490   28193 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 14:57:48.891529   28193 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 14:57:49.001735   28193 proxier.go:367] userspace proxy: processing 0 service events\nI0919 14:57:49.001760   28193 proxier.go:346] userspace syncProxyRules took 110.195167ms\nI0919 14:57:49.001773   28193 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\ninterrupt: Gracefully shutting down ...\n
Sep 19 14:59:35.670 E ns/openshift-multus pod/multus-d8qwn node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 255 (Error): 
Sep 19 14:59:36.071 E ns/openshift-sdn pod/ovs-l725z node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=openvswitch container exited with code 255 (Error): 38163c on port 3\n2020-09-19T14:56:58.426Z|00123|connmgr|INFO|br0<->unix#153: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:56:58.480Z|00124|connmgr|INFO|br0<->unix#156: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:56:58.539Z|00125|bridge|INFO|bridge br0: deleted interface veth9a8c7620 on port 5\n2020-09-19T14:56:58.616Z|00126|connmgr|INFO|br0<->unix#159: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:56:58.681Z|00127|connmgr|INFO|br0<->unix#162: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:56:58.726Z|00128|bridge|INFO|bridge br0: deleted interface veth6f32d431 on port 6\n2020-09-19T14:56:58.777Z|00129|connmgr|INFO|br0<->unix#165: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:56:58.822Z|00130|connmgr|INFO|br0<->unix#168: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:56:58.858Z|00131|bridge|INFO|bridge br0: deleted interface vethea10591d on port 7\n2020-09-19T14:56:58.910Z|00132|connmgr|INFO|br0<->unix#171: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:56:58.958Z|00133|connmgr|INFO|br0<->unix#174: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:56:58.999Z|00134|bridge|INFO|bridge br0: deleted interface veth0605d7bf on port 4\n2020-09-19T14:56:59.050Z|00135|connmgr|INFO|br0<->unix#177: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:56:59.096Z|00136|connmgr|INFO|br0<->unix#180: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:56:59.133Z|00137|bridge|INFO|bridge br0: deleted interface vetha699102d on port 12\n2020-09-19T14:56:58.714Z|00013|jsonrpc|WARN|unix#112: receive error: Connection reset by peer\n2020-09-19T14:56:58.714Z|00014|reconnect|WARN|unix#112: connection dropped (Connection reset by peer)\n2020-09-19T14:57:27.964Z|00138|connmgr|INFO|br0<->unix#186: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T14:57:27.995Z|00139|connmgr|INFO|br0<->unix#189: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T14:57:28.025Z|00140|bridge|INFO|bridge br0: deleted interface veth432b7f8f on port 8\novs-vswitchd is not running.\novsdb-server is not running.\n
Sep 19 14:59:36.472 E ns/openshift-machine-config-operator pod/machine-config-daemon-f6mw6 node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=machine-config-daemon container exited with code 255 (Error): 549ee6b0ce2560c145210d44072ede57d27571889f98e4\nI0919 14:57:39.011584   45266 run.go:16] Running: podman pull -q --authfile /var/lib/kubelet/config.json registry.svc.ci.openshift.org/ocp/4.2-2020-09-19-113420@sha256:d0bf79a4730992d8e2a47ed580c81b50841a2180db4e3f81ad8d2b91cf153eb4\n2020-09-19 14:57:39.245872132 +0000 UTC m=+0.104107698 system refresh\n2020-09-19 14:57:58.970615458 +0000 UTC m=+19.828850997 image pull  \ne205650b2a4292cc6189b48a52919ade9be095d6914aba1a3273eef4b4ddfb72\nI0919 14:57:58.979691   45266 rpm-ostree.go:357] Running captured: podman inspect --type=image registry.svc.ci.openshift.org/ocp/4.2-2020-09-19-113420@sha256:d0bf79a4730992d8e2a47ed580c81b50841a2180db4e3f81ad8d2b91cf153eb4\nI0919 14:57:59.085601   45266 rpm-ostree.go:357] Running captured: podman create --net=none --annotation=org.openshift.machineconfigoperator.pivot=true --name ostree-container-pivot-81f243ca-fa88-11ea-9a8a-42010a002002 registry.svc.ci.openshift.org/ocp/4.2-2020-09-19-113420@sha256:d0bf79a4730992d8e2a47ed580c81b50841a2180db4e3f81ad8d2b91cf153eb4\nI0919 14:57:59.235760   45266 rpm-ostree.go:357] Running captured: podman mount bc1036e7cf24247884c1ee8d4f432253c2b531489b566ca8e6fa38bfa68ee3ed\nI0919 14:57:59.344706   45266 rpm-ostree.go:238] Pivoting to: 42.81.20200919.0 (2e2d1f95863f994a79706245ce124c822940b7f6eec902b4ddf185ad36d7e601)\nclient(id:cli dbus:1.549 unit:machine-config-daemon-host.service uid:0) added; new total=1\nInitiated txn UpdateDeployment for client(id:cli dbus:1.549 unit:machine-config-daemon-host.service uid:0): /org/projectatomic/rpmostree1/rhcos\nsanitycheck(/usr/bin/true) successful\nTxn UpdateDeployment on /org/projectatomic/rpmostree1/rhcos successful\nclient(id:cli dbus:1.549 unit:machine-config-daemon-host.service uid:0) vanished; remaining=0\nIn idle state; will auto-exit in 60 seconds\nI0919 14:58:11.865040   37800 update.go:993] initiating reboot: Node will reboot into config rendered-worker-acfc632a5976a508e7fadb80fb0dc731\nI0919 14:58:11.958613   37800 daemon.go:505] Shutting down MachineConfigDaemon\n
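The machine-config-daemon excerpt above is dominated by "Running captured: podman ..." steps, i.e. shelling out and capturing a command's output before acting on it. A rough sketch of that run-and-capture idea, assuming nothing about the daemon's real implementation beyond what the log shows (the helper name is made up; the podman arguments are copied from the event):

package main

import (
    "fmt"
    "os/exec"
)

// runCaptured runs a command and returns its combined stdout/stderr,
// mirroring the "Running captured: ..." lines above. Hypothetical helper.
func runCaptured(name string, args ...string) (string, error) {
    fmt.Printf("Running captured: %s %v\n", name, args)
    out, err := exec.Command(name, args...).CombinedOutput()
    return string(out), err
}

func main() {
    // Illustrative only: inspect the release image referenced in the event.
    out, err := runCaptured("podman", "inspect", "--type=image",
        "registry.svc.ci.openshift.org/ocp/4.2-2020-09-19-113420@sha256:d0bf79a4730992d8e2a47ed580c81b50841a2180db4e3f81ad8d2b91cf153eb4")
    if err != nil {
        fmt.Println("podman inspect failed:", err)
        return
    }
    fmt.Println(out)
}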
Sep 19 14:59:37.007 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-5866b8867-c97ft node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=cluster-node-tuning-operator container exited with code 255 (Error): Error on reading termination message from logs: failed to open log file "/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-5866b8867-c97ft_6978dfb9-fa86-11ea-84fb-42010a000003/cluster-node-tuning-operator/0.log": open /var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-5866b8867-c97ft_6978dfb9-fa86-11ea-84fb-42010a000003/cluster-node-tuning-operator/0.log: no such file or directory
Sep 19 14:59:37.214 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=scheduler container exited with code 255 (Error): ge.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope\nE0919 14:44:57.893667       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope\nE0919 14:44:57.896421       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope\nE0919 14:44:57.896512       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope\nE0919 14:44:57.983411       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope\nE0919 14:44:57.984347       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope\nW0919 14:57:14.071606       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.PersistentVolume ended with: too old resource version: 22783 (29480)\nW0919 14:57:14.153042       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.Service ended with: too old resource version: 22785 (29481)\nW0919 14:57:14.381887       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.ReplicationController ended with: too old resource version: 25800 (29481)\nF0919 14:57:48.586731       1 server.go:247] leaderelection lost\n
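Several of the exits in this run end with a fatal "leaderelection lost". That is the conventional client-go behaviour: when the lease cannot be renewed, OnStoppedLeading exits the process non-zero and the static pod or deployment restarts it. A minimal sketch of that pattern with client-go leader election (lease name, namespace and identity are placeholders, not the scheduler's actual lock configuration):

package main

import (
    "context"
    "log"
    "os"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/leaderelection"
    "k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        log.Fatal(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    hostname, _ := os.Hostname()
    lock := &resourcelock.LeaseLock{
        // Placeholder lease; real control-plane components use their own locks.
        LeaseMeta:  metav1.ObjectMeta{Name: "example-scheduler", Namespace: "default"},
        Client:     client.CoordinationV1(),
        LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
    }

    leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
        Lock:          lock,
        LeaseDuration: 15 * time.Second,
        RenewDeadline: 10 * time.Second,
        RetryPeriod:   2 * time.Second,
        Callbacks: leaderelection.LeaderCallbacks{
            OnStartedLeading: func(ctx context.Context) {
                // A real component starts its controllers here.
                <-ctx.Done()
            },
            OnStoppedLeading: func() {
                // Losing the lease is fatal by convention; this is the
                // "leaderelection lost" line recorded in these events.
                log.Fatal("leaderelection lost")
            },
        },
    })
}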
Sep 19 14:59:38.814 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-5c8485589b-hchtk node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:59:49.558 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
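The dns clusteroperator going Degraded above is surfaced through its status conditions, which can be read with the OpenShift config client. A short sketch, assuming a recent openshift/client-go where the generated clients take a context (error handling trimmed):

package main

import (
    "context"
    "fmt"
    "log"

    configv1 "github.com/openshift/api/config/v1"
    configclient "github.com/openshift/client-go/config/clientset/versioned"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/rest"
)

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        log.Fatal(err)
    }
    client := configclient.NewForConfigOrDie(cfg)

    co, err := client.ConfigV1().ClusterOperators().Get(context.Background(), "dns", metav1.GetOptions{})
    if err != nil {
        log.Fatal(err)
    }
    for _, cond := range co.Status.Conditions {
        if cond.Type == configv1.OperatorDegraded {
            // For the event above this would print something like:
            // Degraded=True (NotAllDNSesAvailable): Not all desired DNS DaemonSets available
            fmt.Printf("Degraded=%s (%s): %s\n", cond.Status, cond.Reason, cond.Message)
        }
    }
}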
Sep 19 14:59:50.009 E ns/openshift-machine-config-operator pod/machine-config-controller-646d8dbf67-4wc4h node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=machine-config-controller container exited with code 2 (Error): erRuntimeConfigController\nI0919 14:59:05.813494       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool worker\nI0919 14:59:05.914954       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool master\nI0919 14:59:06.065356       1 kubelet_config_controller.go:159] Starting MachineConfigController-KubeletConfigController\nI0919 14:59:25.660769       1 node_controller.go:433] Pool master: node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal is now reporting unready: node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal is reporting Unschedulable\nI0919 14:59:26.725023       1 node_controller.go:442] Pool master: node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal has completed update to rendered-master-fd16253b33c344961adee0ada149c259\nI0919 14:59:26.738289       1 node_controller.go:435] Pool master: node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal is now reporting ready\nI0919 14:59:29.245833       1 node_controller.go:433] Pool worker: node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal is now reporting unready: node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal is reporting NotReady\nI0919 14:59:30.661229       1 node_controller.go:758] Setting node ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal to desired config rendered-master-fd16253b33c344961adee0ada149c259\nI0919 14:59:30.682447       1 node_controller.go:452] Pool master: node ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-fd16253b33c344961adee0ada149c259\nI0919 14:59:31.703533       1 node_controller.go:452] Pool master: node ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal changed machineconfiguration.openshift.io/state = Working\nI0919 14:59:31.727587       1 node_controller.go:433] Pool master: node ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal is now reporting unready: node ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal is reporting Unschedulable\n
Sep 19 14:59:51.208 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-b7c5f4b4b-qw8zm node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:59:53.209 E ns/openshift-machine-api pod/machine-api-controllers-6dcdb9df6d-rbbz6 node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=controller-manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:59:53.209 E ns/openshift-machine-api pod/machine-api-controllers-6dcdb9df6d-rbbz6 node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=nodelink-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:59:53.209 E ns/openshift-machine-api pod/machine-api-controllers-6dcdb9df6d-rbbz6 node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=machine-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 14:59:54.183 E ns/openshift-cluster-node-tuning-operator pod/tuned-4szg8 node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=tuned container exited with code 143 (Error): Failed to execute operation: Unit file tuned.service does not exist.\nI0919 14:59:36.042595    3060 openshift-tuned.go:209] Extracting tuned profiles\nI0919 14:59:36.060385    3060 openshift-tuned.go:739] Resync period to pull node/pod labels: 55 [s]\nE0919 14:59:42.190588    3060 openshift-tuned.go:881] Get https://172.30.0.1:443/api/v1/nodes/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal: dial tcp 172.30.0.1:443: connect: no route to host\nI0919 14:59:42.190715    3060 openshift-tuned.go:883] Increasing resyncPeriod to 110\n
Sep 19 15:00:00.896 E kube-apiserver Kube API started failing: Get https://api.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces/kube-system?timeout=3s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
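The monitor's "Kube API started failing" entries come from timeout-bounded GETs like the one quoted, where a 3s client budget turns a slow or unreachable apiserver into "context deadline exceeded". A minimal sketch of such a probe (the URL is copied from the event; the bearer token and CA handling a real probe needs are omitted):

package main

import (
    "context"
    "fmt"
    "net/http"
    "time"
)

func main() {
    // 3-second budget, matching the ?timeout=3s in the request above.
    ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    defer cancel()

    url := "https://api.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces/kube-system?timeout=3s"
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    if err != nil {
        fmt.Println(err)
        return
    }

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        // An unresponsive apiserver surfaces here as
        // "context deadline exceeded" or "connection refused".
        fmt.Println("kube API check failed:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("kube API responded:", resp.Status)
}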
Sep 19 15:00:08.239 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error):        1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 14:54:58.091648       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:54:58.092048       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 14:54:58.299041       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:54:58.299326       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Sep 19 15:00:08.239 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=kube-apiserver-insecure-readyz-7 container exited with code 255 (Error): I0919 14:44:53.452777       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Sep 19 15:00:08.239 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=kube-apiserver-7 container exited with code 255 (Error): i-int-gce.dev.openshift.com:2379 <nil>}]\nW0919 14:57:30.116040       1 asm_amd64.s:1337] Failed to dial etcd-2.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379: grpc: the connection is closing; please retry.\nW0919 14:57:30.116071       1 asm_amd64.s:1337] Failed to dial etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379: grpc: the connection is closing; please retry.\nW0919 14:57:30.125980       1 reflector.go:293] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: watch of *v1.ClusterResourceQuota ended with: too old resource version: 26421 (29663)\nI0919 14:57:31.396462       1 controller.go:606] quota admission added evaluator for: prometheuses.monitoring.coreos.com\nI0919 14:57:31.396654       1 controller.go:606] quota admission added evaluator for: prometheuses.monitoring.coreos.com\nI0919 14:57:31.631880       1 aggregator.go:222] Updating OpenAPI spec because k8s_internal_local_delegation_chain_0000000002 is updated\nI0919 14:57:33.568422       1 aggregator.go:225] Finished OpenAPI spec generation after 1.936498224s\nI0919 14:57:33.568677       1 controller.go:107] OpenAPI AggregationController: Processing item v1.build.openshift.io\nI0919 14:57:33.804982       1 controller.go:107] OpenAPI AggregationController: Processing item v1.project.openshift.io\nI0919 14:57:35.858968       1 aggregator.go:222] Updating OpenAPI spec because k8s_internal_local_delegation_chain_0000000002 is updated\nI0919 14:57:37.729329       1 aggregator.go:225] Finished OpenAPI spec generation after 1.870201916s\nI0919 14:57:48.587414       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nI0919 14:57:48.587763       1 genericapiserver.go:546] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\n
Sep 19 15:00:08.641 E ns/openshift-etcd pod/etcd-member-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=etcd-member container exited with code 255 (Error):  rafthttp: stopped streaming with peer 5c712ff1661feac3 (writer)\n2020-09-19 14:57:48.620123 I | rafthttp: closed the TCP streaming connection with peer 5c712ff1661feac3 (stream Message writer)\n2020-09-19 14:57:48.620211 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (writer)\n2020-09-19 14:57:48.620386 I | rafthttp: stopped HTTP pipelining with peer 5c712ff1661feac3\n2020-09-19 14:57:48.620527 W | rafthttp: lost the TCP streaming connection with peer 5c712ff1661feac3 (stream MsgApp v2 reader)\n2020-09-19 14:57:48.620582 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (stream MsgApp v2 reader)\n2020-09-19 14:57:48.620679 W | rafthttp: lost the TCP streaming connection with peer 5c712ff1661feac3 (stream Message reader)\n2020-09-19 14:57:48.620731 E | rafthttp: failed to read 5c712ff1661feac3 on stream Message (context canceled)\n2020-09-19 14:57:48.620763 I | rafthttp: peer 5c712ff1661feac3 became inactive (message send to peer failed)\n2020-09-19 14:57:48.620791 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (stream Message reader)\n2020-09-19 14:57:48.620842 I | rafthttp: stopped peer 5c712ff1661feac3\n2020-09-19 14:57:48.633939 I | embed: rejected connection from "10.0.0.5:57238" (error "read tcp 10.0.0.3:2380->10.0.0.5:57238: use of closed network connection", ServerName "ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com")\n2020-09-19 14:57:48.634094 I | embed: rejected connection from "10.0.0.5:57236" (error "read tcp 10.0.0.3:2380->10.0.0.5:57236: use of closed network connection", ServerName "ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com")\n2020-09-19 14:57:48.637941 I | embed: rejected connection from "10.0.0.2:57006" (error "set tcp 10.0.0.3:2380: use of closed network connection", ServerName "ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com")\n2020-09-19 14:57:48.655141 I | embed: rejected connection from "10.0.0.2:57008" (error "set tcp 10.0.0.3:2380: use of closed network connection", ServerName "ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com")\n
Sep 19 15:00:08.641 E ns/openshift-etcd pod/etcd-member-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=etcd-metrics container exited with code 255 (Error): 2020-09-19 14:57:19.313514 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 14:57:19.314497 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-09-19 14:57:19.314993 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 14:57:19.333324 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Sep 19 15:00:09.452 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal container=scheduler container exited with code 255 (Error): ge.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope\nE0919 14:44:57.893667       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope\nE0919 14:44:57.896421       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope\nE0919 14:44:57.896512       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope\nE0919 14:44:57.983411       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope\nE0919 14:44:57.984347       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope\nW0919 14:57:14.071606       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.PersistentVolume ended with: too old resource version: 22783 (29480)\nW0919 14:57:14.153042       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.Service ended with: too old resource version: 22785 (29481)\nW0919 14:57:14.381887       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.ReplicationController ended with: too old resource version: 25800 (29481)\nF0919 14:57:48.586731       1 server.go:247] leaderelection lost\n
Sep 19 15:00:33.549 E kube-apiserver failed contacting the API: Get https://api.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators?resourceVersion=32194&timeout=6m48s&timeoutSeconds=408&watch=true: dial tcp 35.237.0.38:6443: connect: connection refused
Sep 19 15:00:57.895 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 15:01:51.690 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:48.388308       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:48.388583       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:02.934193       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:02.934770       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:07.955157       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:07.955541       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:12.967696       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:12.968018       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:17.979713       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:17.980062       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:22.990283       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:22.990630       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:28.001200       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:28.001546       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:33.012677       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:33.013120       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Sep 19 15:01:51.690 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-controller-manager-6 container exited with code 255 (Error):  to 2020-09-20 14:14:17 +0000 UTC (now=2020-09-19 14:41:47.077668523 +0000 UTC))\nI0919 14:41:47.077682       1 clientca.go:93] [3] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-09-19 14:14:18 +0000 UTC to 2021-09-19 14:14:18 +0000 UTC (now=2020-09-19 14:41:47.077677537 +0000 UTC))\nI0919 14:41:47.077691       1 clientca.go:93] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-09-19 14:14:18 +0000 UTC to 2021-09-19 14:14:18 +0000 UTC (now=2020-09-19 14:41:47.077686955 +0000 UTC))\nI0919 14:41:47.083804       1 controllermanager.go:173] Version: v1.14.6+d7721aa\nI0919 14:41:47.086154       1 serving.go:196] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1600525403" (2020-09-19 14:23:41 +0000 UTC to 2022-09-19 14:23:42 +0000 UTC (now=2020-09-19 14:41:47.086116599 +0000 UTC))\nI0919 14:41:47.086193       1 serving.go:196] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1600525403" [] issuer="<self>" (2020-09-19 14:23:22 +0000 UTC to 2022-11-18 14:23:23 +0000 UTC (now=2020-09-19 14:41:47.086178812 +0000 UTC))\nI0919 14:41:47.086237       1 secure_serving.go:125] Serving securely on [::]:10257\nI0919 14:41:47.086290       1 serving.go:78] Starting DynamicLoader\nI0919 14:41:47.086909       1 leaderelection.go:217] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0919 15:00:32.762379       1 controllermanager.go:287] leaderelection lost\nI0919 15:00:32.762528       1 serving.go:89] Shutting down DynamicLoader\n
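The cert-syncer container above spends its time logging "Syncing configmaps"/"Syncing secrets" as it copies certificate material into the static pod's resource directory. A stripped-down sketch of that copy step for a single ConfigMap (namespace, name and target directory are placeholders; the real syncer also handles secrets, optional resources and atomic writes):

package main

import (
    "context"
    "log"
    "os"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

// syncConfigMapToDir writes every key of the ConfigMap as a file under dir,
// roughly what one "Syncing configmaps: [...]" pass does for a single entry.
func syncConfigMapToDir(ctx context.Context, client kubernetes.Interface, ns, name, dir string) error {
    cm, err := client.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        return err
    }
    if err := os.MkdirAll(dir, 0o755); err != nil {
        return err
    }
    for key, value := range cm.Data {
        if err := os.WriteFile(filepath.Join(dir, key), []byte(value), 0o644); err != nil {
            return err
        }
    }
    return nil
}

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        log.Fatal(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Placeholder arguments; the real syncer writes into the static-pod-certs
    // directory on the host rather than /tmp.
    if err := syncConfigMapToDir(context.Background(), client,
        "openshift-kube-controller-manager", "client-ca", "/tmp/client-ca"); err != nil {
        log.Fatal(err)
    }
}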
Sep 19 15:01:51.910 E ns/openshift-monitoring pod/prometheus-adapter-5599954b55-cq2rw node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error): I0919 14:44:06.330761       1 adapter.go:93] successfully using in-cluster auth\nI0919 14:44:07.254439       1 secure_serving.go:116] Serving securely on [::]:6443\n
Sep 19 15:01:54.909 E ns/openshift-ingress pod/router-default-6bdff56b44-zfw2x node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:59:46.260343       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:59:51.267387       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 14:59:56.256023       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:00:01.256696       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:00:06.259547       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:00:11.259791       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:00:16.262296       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:00:21.261367       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:00:31.465623       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:00:36.463164       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:01:10.937778       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:01:15.930384       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:01:42.917423       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:01:47.915571       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
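The router excerpt above is a steady stream of "Checking http://localhost:80 ... Health check ok : 0 retry attempt(s)" lines, i.e. a local HTTP probe with a small retry budget after each reload. A sketch of that kind of check (the retry count and interval are made-up values, and only connection errors are retried):

package main

import (
    "fmt"
    "net/http"
    "time"
)

// healthCheck probes url up to retries+1 times and reports how many retries
// were needed, similar to the router's post-reload check. Hypothetical helper.
func healthCheck(url string, retries int, interval time.Duration) error {
    for attempt := 0; ; attempt++ {
        resp, err := http.Get(url)
        if err == nil {
            resp.Body.Close()
            fmt.Printf(" - Health check ok : %d retry attempt(s).\n", attempt)
            return nil
        }
        if attempt >= retries {
            return fmt.Errorf("health check failed after %d attempts: %w", attempt+1, err)
        }
        time.Sleep(interval)
    }
}

func main() {
    fmt.Println(" - Checking http://localhost:80 ...")
    if err := healthCheck("http://localhost:80", 5, time.Second); err != nil {
        fmt.Println(err)
    }
}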
Sep 19 15:01:57.905 E ns/openshift-console pod/downloads-64467b7b9c-7jfcj node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=download-server container exited with code 137 (Error): 0 14:59:08] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 14:59:10] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 14:59:18] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 14:59:20] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 14:59:28] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 14:59:30] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 14:59:38] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 14:59:40] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 14:59:48] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 14:59:50] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 14:59:58] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:00:00] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:00:08] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:00:10] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:00:18] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:00:20] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:00:28] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:00:30] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:00:38] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:00:40] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:00:48] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:00:50] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:00:58] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:01:00] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:01:08] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:01:10] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:01:18] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:01:20] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:01:28] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:01:30] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:01:38] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:01:40] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:01:48] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [19/Sep/2020 15:01:50] "GET / HTTP/1.1" 200 -\n
Sep 19 15:01:58.302 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=scheduler container exited with code 255 (Error):  on [::]:10251\nI0919 14:43:24.854122       1 serving.go:196] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1600525403" (2020-09-19 14:23:41 +0000 UTC to 2022-09-19 14:23:42 +0000 UTC (now=2020-09-19 14:43:24.854091737 +0000 UTC))\nI0919 14:43:24.854160       1 serving.go:196] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1600525403" [] issuer="<self>" (2020-09-19 14:23:22 +0000 UTC to 2022-11-18 14:23:23 +0000 UTC (now=2020-09-19 14:43:24.854145158 +0000 UTC))\nI0919 14:43:24.854177       1 secure_serving.go:125] Serving securely on [::]:10259\nI0919 14:43:24.854257       1 serving.go:78] Starting DynamicLoader\nI0919 14:43:25.821652       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0919 14:43:25.921903       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0919 14:43:25.921936       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0919 14:57:13.934708       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.PersistentVolume ended with: too old resource version: 18379 (29476)\nW0919 14:59:58.805187       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.StorageClass ended with: too old resource version: 18388 (31709)\nW0919 14:59:58.949668       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.ReplicationController ended with: too old resource version: 25800 (31710)\nW0919 15:00:04.417718       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 18380 (31761)\nF0919 15:00:32.758002       1 server.go:247] leaderelection lost\n
Sep 19 15:01:59.101 E ns/openshift-multus pod/multus-wjmbq node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 255 (Error): 
Sep 19 15:01:59.501 E ns/openshift-image-registry pod/node-ca-skbw5 node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=node-ca container exited with code 255 (Error): 
Sep 19 15:01:59.904 E ns/openshift-sdn pod/sdn-grmbq node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error): rt\nI0919 15:00:18.524985   51918 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 15:00:18.630235   51918 proxier.go:367] userspace proxy: processing 0 service events\nI0919 15:00:18.630274   51918 proxier.go:346] userspace syncProxyRules took 105.265913ms\nI0919 15:00:18.630287   51918 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 15:00:31.419745   51918 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.129.0.79:5443 10.129.0.80:5443 10.130.0.16:5443]\nI0919 15:00:31.419855   51918 roundrobin.go:240] Delete endpoint 10.129.0.80:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0919 15:00:31.419963   51918 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 15:00:31.470436   51918 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.129.0.80:5443 10.130.0.16:5443]\nI0919 15:00:31.470474   51918 roundrobin.go:240] Delete endpoint 10.129.0.79:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0919 15:00:31.622361   51918 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 15:00:31.719284   51918 proxier.go:367] userspace proxy: processing 0 service events\nI0919 15:00:31.719315   51918 proxier.go:346] userspace syncProxyRules took 96.924053ms\nI0919 15:00:31.719327   51918 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 15:00:31.719336   51918 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 15:00:31.885023   51918 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 15:00:31.961197   51918 proxier.go:367] userspace proxy: processing 0 service events\nI0919 15:00:31.961226   51918 proxier.go:346] userspace syncProxyRules took 76.175978ms\nI0919 15:00:31.961239   51918 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\ninterrupt: Gracefully shutting down ...\n
Sep 19 15:02:00.301 E ns/openshift-dns pod/dns-default-7zf5q node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=dns container exited with code 255 (Error): .:5353\n2020-09-19T14:49:07.009Z [INFO] plugin/reload: Running configuration MD5 = acefebac40c697acb35b9e96ca3c7ec9\n2020-09-19T14:49:07.009Z [INFO] CoreDNS-1.5.2\n2020-09-19T14:49:07.009Z [INFO] linux/amd64, go1.12.9, \nCoreDNS-1.5.2\nlinux/amd64, go1.12.9, \nW0919 14:57:14.074351       1 reflector.go:289] github.com/coredns/coredns/plugin/kubernetes/controller.go:271: watch of *v1.Namespace ended with: too old resource version: 22784 (29480)\nW0919 14:57:14.149612       1 reflector.go:289] github.com/coredns/coredns/plugin/kubernetes/controller.go:264: watch of *v1.Service ended with: too old resource version: 22785 (29481)\nE0919 14:57:48.788709       1 reflector.go:270] github.com/coredns/coredns/plugin/kubernetes/controller.go:271: Failed to watch *v1.Namespace: Get https://172.30.0.1:443/api/v1/namespaces?resourceVersion=29480&timeout=5m39s&timeoutSeconds=339&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0919 14:57:48.788779       1 reflector.go:270] github.com/coredns/coredns/plugin/kubernetes/controller.go:266: Failed to watch *v1.Endpoints: Get https://172.30.0.1:443/api/v1/endpoints?resourceVersion=29628&timeout=6m17s&timeoutSeconds=377&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\n[INFO] SIGTERM: Shutting down servers then terminating\n
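The CoreDNS excerpt above shows the usual informer failure mode during an apiserver rollover: a watch ends with "too old resource version" or a connection error, and the client relists to get a fresh resourceVersion before watching again. A compressed sketch of that relist-then-watch loop with client-go (watching Namespaces only, with no local cache, unlike a real reflector):

package main

import (
    "context"
    "log"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        log.Fatal(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    ctx := context.Background()

    for {
        // List first to obtain a current resourceVersion.
        list, err := client.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
        if err != nil {
            log.Printf("list failed, retrying: %v", err)
            time.Sleep(time.Second)
            continue
        }

        w, err := client.CoreV1().Namespaces().Watch(ctx, metav1.ListOptions{
            ResourceVersion: list.ResourceVersion,
        })
        if err != nil {
            log.Printf("watch failed, relisting: %v", err)
            time.Sleep(time.Second)
            continue
        }
        for ev := range w.ResultChan() {
            if ev.Type == watch.Error {
                // Typically a 410 "too old resource version"; fall through and relist.
                break
            }
            log.Printf("event: %s", ev.Type)
        }
        w.Stop()
    }
}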
Sep 19 15:02:00.301 E ns/openshift-dns pod/dns-default-7zf5q node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (103) - No such process\n
Sep 19 15:02:00.702 E ns/openshift-machine-config-operator pod/machine-config-server-zqkgv node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=machine-config-server container exited with code 255 (Error): I0919 14:57:05.799322       1 start.go:38] Version: machine-config-daemon-4.2.33-202006040618-2-g204f5642-dirty (204f5642bd3adbfa5c85e74958223fb0cf8ad2db)\nI0919 14:57:05.800624       1 api.go:51] Launching server on :22624\nI0919 14:57:05.800751       1 api.go:51] Launching server on :22623\n
Sep 19 15:02:01.902 E ns/openshift-apiserver pod/apiserver-8rk6s node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=openshift-apiserver container exited with code 255 (Error): 8s_internal_local_delegation_chain_0000000008\nI0919 15:00:31.689768       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000009\nI0919 15:00:31.689813       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000011\nI0919 15:00:32.401308       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001\nI0919 15:00:32.401436       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000003\nI0919 15:00:32.401505       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000012\nI0919 15:00:32.401552       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000005\nI0919 15:00:32.401645       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000010\nI0919 15:00:32.401687       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002\nI0919 15:00:32.689758       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000004\nI0919 15:00:32.689898       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000006\nI0919 15:00:32.689962       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000007\nI0919 15:00:32.690010       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000008\nI0919 15:00:32.690052       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000009\nI0919 15:00:32.690140       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000011\n
Sep 19 15:02:02.301 E ns/openshift-multus pod/multus-admission-controller-h8d5v node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=multus-admission-controller container exited with code 255 (Error): 
Sep 19 15:02:02.702 E ns/openshift-cluster-node-tuning-operator pod/tuned-q7mhg node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=tuned container exited with code 255 (Error): ended profile (openshift-control-plane)\nI0919 15:00:00.488784   82829 openshift-tuned.go:263] Starting tuned...\n2020-09-19 15:00:00,649 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-09-19 15:00:00,657 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-09-19 15:00:00,658 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-09-19 15:00:00,660 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-09-19 15:00:00,661 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-09-19 15:00:00,720 INFO     tuned.daemon.controller: starting controller\n2020-09-19 15:00:00,720 INFO     tuned.daemon.daemon: starting tuning\n2020-09-19 15:00:00,727 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-09-19 15:00:00,728 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-09-19 15:00:00,733 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-19 15:00:00,734 INFO     tuned.plugins.base: instance disk: assigning devices sda\n2020-09-19 15:00:00,736 INFO     tuned.plugins.base: instance net: assigning devices ens4\n2020-09-19 15:00:00,843 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-19 15:00:00,845 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0919 15:00:04.485794   82829 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal) labels changed node wide: true\nI0919 15:00:05.304358   82829 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 15:00:05.308938   82829 openshift-tuned.go:441] Getting recommended profile...\nI0919 15:00:05.674956   82829 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
Sep 19 15:02:03.501 E ns/openshift-controller-manager pod/controller-manager-p9zwv node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=controller-manager container exited with code 255 (Error): 
Sep 19 15:02:06.302 E ns/openshift-etcd pod/etcd-member-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=etcd-member container exited with code 255 (Error):  (stream MsgApp v2 reader)\n2020-09-19 15:00:33.264414 I | rafthttp: stopped streaming with peer 296bf7dcc71524af (stream MsgApp v2 reader)\n2020-09-19 15:00:33.264486 W | rafthttp: lost the TCP streaming connection with peer 296bf7dcc71524af (stream Message reader)\n2020-09-19 15:00:33.264502 E | rafthttp: failed to read 296bf7dcc71524af on stream Message (context canceled)\n2020-09-19 15:00:33.264508 I | rafthttp: peer 296bf7dcc71524af became inactive (message send to peer failed)\n2020-09-19 15:00:33.264515 I | rafthttp: stopped streaming with peer 296bf7dcc71524af (stream Message reader)\n2020-09-19 15:00:33.264528 I | rafthttp: stopped peer 296bf7dcc71524af\n2020-09-19 15:00:33.264535 I | rafthttp: stopping peer 5c712ff1661feac3...\n2020-09-19 15:00:33.264867 I | rafthttp: closed the TCP streaming connection with peer 5c712ff1661feac3 (stream MsgApp v2 writer)\n2020-09-19 15:00:33.264890 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (writer)\n2020-09-19 15:00:33.265219 I | rafthttp: closed the TCP streaming connection with peer 5c712ff1661feac3 (stream Message writer)\n2020-09-19 15:00:33.265236 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (writer)\n2020-09-19 15:00:33.265265 I | rafthttp: stopped HTTP pipelining with peer 5c712ff1661feac3\n2020-09-19 15:00:33.265375 W | rafthttp: lost the TCP streaming connection with peer 5c712ff1661feac3 (stream MsgApp v2 reader)\n2020-09-19 15:00:33.265386 E | rafthttp: failed to read 5c712ff1661feac3 on stream MsgApp v2 (context canceled)\n2020-09-19 15:00:33.265406 I | rafthttp: peer 5c712ff1661feac3 became inactive (message send to peer failed)\n2020-09-19 15:00:33.265413 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (stream MsgApp v2 reader)\n2020-09-19 15:00:33.265461 W | rafthttp: lost the TCP streaming connection with peer 5c712ff1661feac3 (stream Message reader)\n2020-09-19 15:00:33.265471 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (stream Message reader)\n2020-09-19 15:00:33.265478 I | rafthttp: stopped peer 5c712ff1661feac3\n
Sep 19 15:02:06.302 E ns/openshift-etcd pod/etcd-member-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=etcd-metrics container exited with code 255 (Error): 2020-09-19 15:00:12.602101 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 15:00:12.603046 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-09-19 15:00:12.603538 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 15:00:12.623956 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Sep 19 15:02:06.702 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:48.388308       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:48.388583       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:02.934193       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:02.934770       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:07.955157       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:07.955541       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:12.967696       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:12.968018       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:17.979713       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:17.980062       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:22.990283       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:22.990630       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:28.001200       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:28.001546       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:33.012677       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:33.013120       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Sep 19 15:02:06.702 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-controller-manager-6 container exited with code 255 (Error):  to 2020-09-20 14:14:17 +0000 UTC (now=2020-09-19 14:41:47.077668523 +0000 UTC))\nI0919 14:41:47.077682       1 clientca.go:93] [3] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-09-19 14:14:18 +0000 UTC to 2021-09-19 14:14:18 +0000 UTC (now=2020-09-19 14:41:47.077677537 +0000 UTC))\nI0919 14:41:47.077691       1 clientca.go:93] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-09-19 14:14:18 +0000 UTC to 2021-09-19 14:14:18 +0000 UTC (now=2020-09-19 14:41:47.077686955 +0000 UTC))\nI0919 14:41:47.083804       1 controllermanager.go:173] Version: v1.14.6+d7721aa\nI0919 14:41:47.086154       1 serving.go:196] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1600525403" (2020-09-19 14:23:41 +0000 UTC to 2022-09-19 14:23:42 +0000 UTC (now=2020-09-19 14:41:47.086116599 +0000 UTC))\nI0919 14:41:47.086193       1 serving.go:196] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1600525403" [] issuer="<self>" (2020-09-19 14:23:22 +0000 UTC to 2022-11-18 14:23:23 +0000 UTC (now=2020-09-19 14:41:47.086178812 +0000 UTC))\nI0919 14:41:47.086237       1 secure_serving.go:125] Serving securely on [::]:10257\nI0919 14:41:47.086290       1 serving.go:78] Starting DynamicLoader\nI0919 14:41:47.086909       1 leaderelection.go:217] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0919 15:00:32.762379       1 controllermanager.go:287] leaderelection lost\nI0919 15:00:32.762528       1 serving.go:89] Shutting down DynamicLoader\n
Sep 19 15:02:07.904 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-apiserver-7 container exited with code 255 (Error):      1 controller.go:107] OpenAPI AggregationController: Processing item v1.oauth.openshift.io\nI0919 15:00:32.117591       1 controller.go:107] OpenAPI AggregationController: Processing item v1.user.openshift.io\nI0919 15:00:32.717119       1 controller.go:107] OpenAPI AggregationController: Processing item v1.authorization.openshift.io\nI0919 15:00:32.721999       1 log.go:172] http: TLS handshake error from 10.0.0.5:51442: read tcp 10.0.0.5:6443->10.0.0.5:51442: read: connection reset by peer\nI0919 15:00:32.751626       1 genericapiserver.go:546] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0919 15:00:32.751903       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nW0919 15:00:32.776793       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.0.2 10.0.0.3]\nI0919 15:00:32.803700       1 genericapiserver.go:546] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\n2020/09/19 15:00:32 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/09/19 15:00:32 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/09/19 15:00:32 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/09/19 15:00:32 httputil: ReverseProxy read error during body copy: unexpected EOF\nE0919 15:00:32.850540       1 reflector.go:270] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io)\n
Sep 19 15:02:07.904 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-apiserver-insecure-readyz-7 container exited with code 255 (Error): I0919 14:40:51.672912       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Sep 19 15:02:07.904 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): nal-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nW0919 14:54:44.248234       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 23669 (28069)\nI0919 14:54:45.254025       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:54:45.255055       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 14:54:45.651782       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:54:45.652245       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Sep 19 15:02:08.593 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): =info ts=2020-09-19T15:02:06.184Z caller=main.go:332 fd_limits="(soft=1048576, hard=1048576)"\nlevel=info ts=2020-09-19T15:02:06.184Z caller=main.go:333 vm_limits="(soft=unlimited, hard=unlimited)"\nlevel=info ts=2020-09-19T15:02:06.190Z caller=main.go:652 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T15:02:06.190Z caller=web.go:448 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T15:02:06.203Z caller=main.go:667 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T15:02:06.203Z caller=main.go:668 msg="TSDB started"\nlevel=info ts=2020-09-19T15:02:06.203Z caller=main.go:738 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T15:02:06.203Z caller=main.go:521 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T15:02:06.203Z caller=main.go:535 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T15:02:06.203Z caller=main.go:557 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T15:02:06.203Z caller=main.go:531 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T15:02:06.203Z caller=main.go:517 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T15:02:06.203Z caller=manager.go:776 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T15:02:06.203Z caller=manager.go:782 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T15:02:06.203Z caller=main.go:551 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T15:02:06.206Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T15:02:06.206Z caller=main.go:722 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19T15:02:06.207Z caller=main.go:731 err="error loading config from \"/etc/prometheus/config_out/prometheus.env.yaml\": couldn't load configuration (--config.file=\"/etc/prometheus/config_out/prometheus.env.yaml\"): open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 19 15:02:11.108 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 14:57:48.388308       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:57:48.388583       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:02.934193       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:02.934770       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:07.955157       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:07.955541       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:12.967696       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:12.968018       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:17.979713       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:17.980062       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:22.990283       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:22.990630       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:28.001200       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:28.001546       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:00:33.012677       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:00:33.013120       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Sep 19 15:02:11.108 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-controller-manager-6 container exited with code 255 (Error):  to 2020-09-20 14:14:17 +0000 UTC (now=2020-09-19 14:41:47.077668523 +0000 UTC))\nI0919 14:41:47.077682       1 clientca.go:93] [3] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-09-19 14:14:18 +0000 UTC to 2021-09-19 14:14:18 +0000 UTC (now=2020-09-19 14:41:47.077677537 +0000 UTC))\nI0919 14:41:47.077691       1 clientca.go:93] [4] "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2020-09-19 14:14:18 +0000 UTC to 2021-09-19 14:14:18 +0000 UTC (now=2020-09-19 14:41:47.077686955 +0000 UTC))\nI0919 14:41:47.083804       1 controllermanager.go:173] Version: v1.14.6+d7721aa\nI0919 14:41:47.086154       1 serving.go:196] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1600525403" (2020-09-19 14:23:41 +0000 UTC to 2022-09-19 14:23:42 +0000 UTC (now=2020-09-19 14:41:47.086116599 +0000 UTC))\nI0919 14:41:47.086193       1 serving.go:196] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1600525403" [] issuer="<self>" (2020-09-19 14:23:22 +0000 UTC to 2022-11-18 14:23:23 +0000 UTC (now=2020-09-19 14:41:47.086178812 +0000 UTC))\nI0919 14:41:47.086237       1 secure_serving.go:125] Serving securely on [::]:10257\nI0919 14:41:47.086290       1 serving.go:78] Starting DynamicLoader\nI0919 14:41:47.086909       1 leaderelection.go:217] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0919 15:00:32.762379       1 controllermanager.go:287] leaderelection lost\nI0919 15:00:32.762528       1 serving.go:89] Shutting down DynamicLoader\n
Sep 19 15:02:11.502 E ns/openshift-etcd pod/etcd-member-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=etcd-member container exited with code 255 (Error):  (stream MsgApp v2 reader)\n2020-09-19 15:00:33.264414 I | rafthttp: stopped streaming with peer 296bf7dcc71524af (stream MsgApp v2 reader)\n2020-09-19 15:00:33.264486 W | rafthttp: lost the TCP streaming connection with peer 296bf7dcc71524af (stream Message reader)\n2020-09-19 15:00:33.264502 E | rafthttp: failed to read 296bf7dcc71524af on stream Message (context canceled)\n2020-09-19 15:00:33.264508 I | rafthttp: peer 296bf7dcc71524af became inactive (message send to peer failed)\n2020-09-19 15:00:33.264515 I | rafthttp: stopped streaming with peer 296bf7dcc71524af (stream Message reader)\n2020-09-19 15:00:33.264528 I | rafthttp: stopped peer 296bf7dcc71524af\n2020-09-19 15:00:33.264535 I | rafthttp: stopping peer 5c712ff1661feac3...\n2020-09-19 15:00:33.264867 I | rafthttp: closed the TCP streaming connection with peer 5c712ff1661feac3 (stream MsgApp v2 writer)\n2020-09-19 15:00:33.264890 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (writer)\n2020-09-19 15:00:33.265219 I | rafthttp: closed the TCP streaming connection with peer 5c712ff1661feac3 (stream Message writer)\n2020-09-19 15:00:33.265236 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (writer)\n2020-09-19 15:00:33.265265 I | rafthttp: stopped HTTP pipelining with peer 5c712ff1661feac3\n2020-09-19 15:00:33.265375 W | rafthttp: lost the TCP streaming connection with peer 5c712ff1661feac3 (stream MsgApp v2 reader)\n2020-09-19 15:00:33.265386 E | rafthttp: failed to read 5c712ff1661feac3 on stream MsgApp v2 (context canceled)\n2020-09-19 15:00:33.265406 I | rafthttp: peer 5c712ff1661feac3 became inactive (message send to peer failed)\n2020-09-19 15:00:33.265413 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (stream MsgApp v2 reader)\n2020-09-19 15:00:33.265461 W | rafthttp: lost the TCP streaming connection with peer 5c712ff1661feac3 (stream Message reader)\n2020-09-19 15:00:33.265471 I | rafthttp: stopped streaming with peer 5c712ff1661feac3 (stream Message reader)\n2020-09-19 15:00:33.265478 I | rafthttp: stopped peer 5c712ff1661feac3\n
Sep 19 15:02:11.502 E ns/openshift-etcd pod/etcd-member-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=etcd-metrics container exited with code 255 (Error): 2020-09-19 15:00:12.602101 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 15:00:12.603046 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-09-19 15:00:12.603538 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 15:00:12.623956 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Sep 19 15:02:11.903 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=scheduler container exited with code 255 (Error):  on [::]:10251\nI0919 14:43:24.854122       1 serving.go:196] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1600525403" (2020-09-19 14:23:41 +0000 UTC to 2022-09-19 14:23:42 +0000 UTC (now=2020-09-19 14:43:24.854091737 +0000 UTC))\nI0919 14:43:24.854160       1 serving.go:196] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1600525403" [] issuer="<self>" (2020-09-19 14:23:22 +0000 UTC to 2022-11-18 14:23:23 +0000 UTC (now=2020-09-19 14:43:24.854145158 +0000 UTC))\nI0919 14:43:24.854177       1 secure_serving.go:125] Serving securely on [::]:10259\nI0919 14:43:24.854257       1 serving.go:78] Starting DynamicLoader\nI0919 14:43:25.821652       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0919 14:43:25.921903       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0919 14:43:25.921936       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0919 14:57:13.934708       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.PersistentVolume ended with: too old resource version: 18379 (29476)\nW0919 14:59:58.805187       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.StorageClass ended with: too old resource version: 18388 (31709)\nW0919 14:59:58.949668       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.ReplicationController ended with: too old resource version: 25800 (31710)\nW0919 15:00:04.417718       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 18380 (31761)\nF0919 15:00:32.758002       1 server.go:247] leaderelection lost\n
Sep 19 15:02:12.302 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-apiserver-7 container exited with code 255 (Error):      1 controller.go:107] OpenAPI AggregationController: Processing item v1.oauth.openshift.io\nI0919 15:00:32.117591       1 controller.go:107] OpenAPI AggregationController: Processing item v1.user.openshift.io\nI0919 15:00:32.717119       1 controller.go:107] OpenAPI AggregationController: Processing item v1.authorization.openshift.io\nI0919 15:00:32.721999       1 log.go:172] http: TLS handshake error from 10.0.0.5:51442: read tcp 10.0.0.5:6443->10.0.0.5:51442: read: connection reset by peer\nI0919 15:00:32.751626       1 genericapiserver.go:546] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0919 15:00:32.751903       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\nW0919 15:00:32.776793       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.0.2 10.0.0.3]\nI0919 15:00:32.803700       1 genericapiserver.go:546] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\n2020/09/19 15:00:32 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/09/19 15:00:32 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/09/19 15:00:32 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/09/19 15:00:32 httputil: ReverseProxy read error during body copy: unexpected EOF\nE0919 15:00:32.850540       1 reflector.go:270] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io)\n
Sep 19 15:02:12.302 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-apiserver-insecure-readyz-7 container exited with code 255 (Error): I0919 14:40:51.672912       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Sep 19 15:02:12.302 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error): nal-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nW0919 14:54:44.248234       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 23669 (28069)\nI0919 14:54:45.254025       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:54:45.255055       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 14:54:45.651782       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 14:54:45.652245       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Sep 19 15:02:15.721 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Sep 19 15:02:23.811 E ns/openshift-authentication-operator pod/authentication-operator-6b5489b97c-st4w8 node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=operator container exited with code 255 (Error): e":"Degraded"},{"lastTransitionTime":"2020-09-19T15:01:28Z","message":"Progressing: not all deployment replicas are ready","reason":"ProgressingOAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-09-19T14:35:45Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-19T14:26:56Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0919 15:01:52.974423       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"788bf7fe-fa83-11ea-94fb-42010a000004", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteHealthDegraded: failed to GET route: dial tcp 35.229.37.94:443: connect: connection refused" to ""\nI0919 15:01:56.176942       1 status_controller.go:165] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-19T14:26:56Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-19T15:01:56Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-19T14:35:45Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-19T14:26:56Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0919 15:01:56.187871       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"788bf7fe-fa83-11ea-94fb-42010a000004", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from True to False ("")\nI0919 15:02:15.036337       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 15:02:15.036494       1 leaderelection.go:66] leaderelection lost\n
Sep 19 15:02:25.586 E ns/openshift-insights pod/insights-operator-55b7595ccc-d8hdb node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=operator container exited with code 2 (Error): Error on reading termination message from logs: failed to open log file "/var/log/pods/openshift-insights_insights-operator-55b7595ccc-d8hdb_baa44897-fa88-11ea-927e-42010a000002/operator/0.log": open /var/log/pods/openshift-insights_insights-operator-55b7595ccc-d8hdb_baa44897-fa88-11ea-927e-42010a000002/operator/0.log: no such file or directory
Sep 19 15:02:26.193 E ns/openshift-monitoring pod/cluster-monitoring-operator-75d966b464-24d55 node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=cluster-monitoring-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 15:02:26.591 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-69b575cf4b-d2c8g node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=kube-controller-manager-operator container exited with code 255 (Error): {client-ca false} {trusted-ca-bundle true}]\\nI0919 15:00:28.001546       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\\nI0919 15:00:33.012677       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\\nI0919 15:00:33.013120       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\\n\"" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal pods/kube-controller-manager-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=\"kube-controller-manager-6\" is not ready"\nI0919 15:02:17.255500       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"6cc25ea7-fa83-11ea-94fb-42010a000004", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal pods/kube-controller-manager-ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=\"kube-controller-manager-6\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0919 15:02:17.619501       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"6cc25ea7-fa83-11ea-94fb-42010a000004", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-6-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal -n openshift-kube-controller-manager because it was missing\nI0919 15:02:18.092613       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 15:02:18.092754       1 leaderelection.go:66] leaderelection lost\n
Sep 19 15:02:27.794 E ns/openshift-machine-config-operator pod/machine-config-operator-dd4cb4788-dtjsn node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=machine-config-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 15:02:38.588 E ns/openshift-console pod/console-658dc84989-mlvb2 node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=console container exited with code 2 (Error): 2020/09/19 14:45:52 cmd/main: cookies are secure!\n2020/09/19 14:45:52 cmd/main: Binding to 0.0.0.0:8443...\n2020/09/19 14:45:52 cmd/main: using TLS\n2020/09/19 14:47:04 http: TLS handshake error from 10.128.2.22:60498: read tcp 10.129.0.56:8443->10.128.2.22:60498: read: connection reset by peer\n
Sep 19 15:02:45.787 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-b7c5f4b4b-5qb2b node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=operator container exited with code 255 (Error): atus): Object 'Kind' is missing in '{\n  "paths": [\n    "/apis",\n    "/metrics",\n    "/version"\n  ]\n}'\nI0919 15:01:11.239900       1 workload_controller.go:325] No service bindings found, nothing to delete.\nI0919 15:01:11.397628       1 request.go:530] Throttling request took 157.66375ms, request: DELETE:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver\nI0919 15:01:11.597611       1 request.go:530] Throttling request took 196.675515ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-catalog-apiserver\nI0919 15:01:11.603286       1 workload_controller.go:179] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0919 15:01:19.807832       1 leaderelection.go:258] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0919 15:01:29.817256       1 leaderelection.go:258] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0919 15:01:39.828170       1 leaderelection.go:258] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0919 15:01:50.075635       1 leaderelection.go:258] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0919 15:02:00.091530       1 leaderelection.go:258] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0919 15:02:10.103140       1 leaderelection.go:258] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0919 15:02:20.126044       1 leaderelection.go:258] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0919 15:02:24.214390       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0919 15:02:24.214460       1 leaderelection.go:66] leaderelection lost\n
Sep 19 15:02:52.301 E ns/openshift-operator-lifecycle-manager pod/packageserver-7d6cb958db-vdf49 node/ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 15:03:06.897 E kube-apiserver Kube API started failing: Get https://api.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces/kube-system?timeout=3s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 19 15:03:12.895 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 15:03:37.934 E kube-apiserver failed contacting the API: Get https://api.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:6443/api/v1/pods?resourceVersion=35321&timeout=5m41s&timeoutSeconds=341&watch=true: dial tcp 35.237.0.38:6443: connect: connection refused
Sep 19 15:03:57.896 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 15:04:00.480 E ns/openshift-image-registry pod/node-ca-v5hln node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=node-ca container exited with code 255 (Error): 
Sep 19 15:04:00.865 E ns/openshift-monitoring pod/node-exporter-r2ggg node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 255 (Error): 20-09-19T14:45:53Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T14:45:53Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 15:04:00.865 E ns/openshift-monitoring pod/node-exporter-r2ggg node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy container exited with code 255 (Error): I0919 14:45:53.745804   58448 main.go:241] Reading certificate files\nI0919 14:45:53.746146   58448 main.go:274] Starting TCP socket on 10.0.32.3:9100\nI0919 14:45:53.746353   58448 main.go:281] Listening securely on 10.0.32.3:9100\nI0919 15:02:39.887792   58448 main.go:336] received interrupt, shutting down\nE0919 15:02:39.888094   58448 main.go:289] failed to gracefully close secure listener: close tcp 10.0.32.3:9100: use of closed network connection\n
Sep 19 15:04:01.259 E ns/openshift-multus pod/multus-fbwc6 node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 255 (Error): 
Sep 19 15:04:01.660 E ns/openshift-sdn pod/sdn-z6b8m node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error): manager/v1-packages-operators-coreos-com: to [10.130.0.16:5443]\nI0919 15:02:35.982990   67564 roundrobin.go:240] Delete endpoint 10.130.0.16:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0919 15:02:36.020833   67564 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 15:02:36.092756   67564 proxier.go:367] userspace proxy: processing 0 service events\nI0919 15:02:36.092794   67564 proxier.go:346] userspace syncProxyRules took 71.936951ms\nI0919 15:02:36.092807   67564 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 15:02:36.092819   67564 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 15:02:36.092846   67564 service.go:332] Adding new service port "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:" at 172.30.115.142:443/TCP\nI0919 15:02:36.213870   67564 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics:https-metrics to [10.130.0.25:8081]\nI0919 15:02:36.213905   67564 roundrobin.go:240] Delete endpoint 10.130.0.25:8081 for service "openshift-operator-lifecycle-manager/olm-operator-metrics:https-metrics"\nI0919 15:02:36.244420   67564 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 15:02:36.319879   67564 proxier.go:367] userspace proxy: processing 0 service events\nI0919 15:02:36.319911   67564 proxier.go:346] userspace syncProxyRules took 75.467178ms\nI0919 15:02:36.319923   67564 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 15:02:36.319934   67564 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 15:02:36.475269   67564 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 15:02:36.545756   67564 proxier.go:367] userspace proxy: processing 0 service events\nI0919 15:02:36.545778   67564 proxier.go:346] userspace syncProxyRules took 70.482734ms\nI0919 15:02:36.545788   67564 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\ninterrupt: Gracefully shutting down ...\n
Sep 19 15:04:02.059 E ns/openshift-sdn pod/ovs-jbx8g node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=openvswitch container exited with code 255 (Error): 17 on port 13\n2020-09-19T15:01:50.598Z|00140|connmgr|INFO|br0<->unix#210: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T15:01:50.670Z|00141|connmgr|INFO|br0<->unix#213: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T15:01:51.279Z|00005|jsonrpc|WARN|unix#169: receive error: Connection reset by peer\n2020-09-19T15:01:51.279Z|00006|reconnect|WARN|unix#169: connection dropped (Connection reset by peer)\n2020-09-19T15:01:50.709Z|00142|bridge|INFO|bridge br0: deleted interface veth92872af6 on port 11\n2020-09-19T15:01:50.760Z|00143|connmgr|INFO|br0<->unix#216: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T15:01:50.823Z|00144|connmgr|INFO|br0<->unix#219: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T15:01:50.861Z|00145|bridge|INFO|bridge br0: deleted interface vethbf1abfc2 on port 9\n2020-09-19T15:01:50.905Z|00146|connmgr|INFO|br0<->unix#222: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T15:01:50.975Z|00147|connmgr|INFO|br0<->unix#225: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T15:01:51.010Z|00148|bridge|INFO|bridge br0: deleted interface veth1da1fcba on port 10\n2020-09-19T15:01:51.059Z|00149|connmgr|INFO|br0<->unix#228: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T15:01:51.129Z|00150|connmgr|INFO|br0<->unix#231: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T15:01:51.163Z|00151|bridge|INFO|bridge br0: deleted interface veth6b38e573 on port 14\n2020-09-19T15:01:51.214Z|00152|connmgr|INFO|br0<->unix#234: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T15:01:51.255Z|00153|connmgr|INFO|br0<->unix#237: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T15:01:51.286Z|00154|bridge|INFO|bridge br0: deleted interface veth9b728a57 on port 5\n2020-09-19T15:01:51.738Z|00155|connmgr|INFO|br0<->unix#240: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T15:01:51.769Z|00156|connmgr|INFO|br0<->unix#243: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T15:01:51.795Z|00157|bridge|INFO|bridge br0: deleted interface veth31bde131 on port 12\novs-vswitchd is not running.\novsdb-server is not running.\n
Sep 19 15:04:02.459 E ns/openshift-machine-config-operator pod/machine-config-daemon-c72ln node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=machine-config-daemon container exited with code 255 (Error): 549ee6b0ce2560c145210d44072ede57d27571889f98e4\nI0919 15:02:01.801147   99162 run.go:16] Running: podman pull -q --authfile /var/lib/kubelet/config.json registry.svc.ci.openshift.org/ocp/4.2-2020-09-19-113420@sha256:d0bf79a4730992d8e2a47ed580c81b50841a2180db4e3f81ad8d2b91cf153eb4\n2020-09-19 15:02:02.033007819 +0000 UTC m=+0.102172983 system refresh\n2020-09-19 15:02:27.248974792 +0000 UTC m=+25.318139977 image pull  \ne205650b2a4292cc6189b48a52919ade9be095d6914aba1a3273eef4b4ddfb72\nI0919 15:02:27.258327   99162 rpm-ostree.go:357] Running captured: podman inspect --type=image registry.svc.ci.openshift.org/ocp/4.2-2020-09-19-113420@sha256:d0bf79a4730992d8e2a47ed580c81b50841a2180db4e3f81ad8d2b91cf153eb4\nI0919 15:02:27.358118   99162 rpm-ostree.go:357] Running captured: podman create --net=none --annotation=org.openshift.machineconfigoperator.pivot=true --name ostree-container-pivot-21d966c7-fa89-11ea-b695-42010a002003 registry.svc.ci.openshift.org/ocp/4.2-2020-09-19-113420@sha256:d0bf79a4730992d8e2a47ed580c81b50841a2180db4e3f81ad8d2b91cf153eb4\nI0919 15:02:27.487326   99162 rpm-ostree.go:357] Running captured: podman mount e22eaa5438a97f0d800c49fe89f3fbfbaa937f40188051e086f30891f8732ef7\nI0919 15:02:27.591310   99162 rpm-ostree.go:238] Pivoting to: 42.81.20200919.0 (2e2d1f95863f994a79706245ce124c822940b7f6eec902b4ddf185ad36d7e601)\nclient(id:cli dbus:1.569 unit:machine-config-daemon-host.service uid:0) added; new total=1\nInitiated txn UpdateDeployment for client(id:cli dbus:1.569 unit:machine-config-daemon-host.service uid:0): /org/projectatomic/rpmostree1/rhcos\nsanitycheck(/usr/bin/true) successful\nTxn UpdateDeployment on /org/projectatomic/rpmostree1/rhcos successful\nclient(id:cli dbus:1.569 unit:machine-config-daemon-host.service uid:0) vanished; remaining=0\nIn idle state; will auto-exit in 60 seconds\nI0919 15:02:39.812388   77938 update.go:993] initiating reboot: Node will reboot into config rendered-worker-acfc632a5976a508e7fadb80fb0dc731\nI0919 15:02:39.885757   77938 daemon.go:505] Shutting down MachineConfigDaemon\n
Sep 19 15:04:02.861 E ns/openshift-dns pod/dns-default-lpjml node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=dns container exited with code 255 (Error): .:5353\n2020-09-19T14:48:38.545Z [INFO] plugin/reload: Running configuration MD5 = acefebac40c697acb35b9e96ca3c7ec9\n2020-09-19T14:48:38.546Z [INFO] CoreDNS-1.5.2\n2020-09-19T14:48:38.546Z [INFO] linux/amd64, go1.12.9, \nCoreDNS-1.5.2\nlinux/amd64, go1.12.9, \nW0919 14:57:14.071429       1 reflector.go:289] github.com/coredns/coredns/plugin/kubernetes/controller.go:271: watch of *v1.Namespace ended with: too old resource version: 22784 (29480)\nW0919 14:57:14.151044       1 reflector.go:289] github.com/coredns/coredns/plugin/kubernetes/controller.go:264: watch of *v1.Service ended with: too old resource version: 22785 (29481)\nW0919 14:57:48.781575       1 reflector.go:289] github.com/coredns/coredns/plugin/kubernetes/controller.go:271: watch of *v1.Namespace ended with: very short watch: github.com/coredns/coredns/plugin/kubernetes/controller.go:271: Unexpected watch close - watch lasted less than a second and no items received\nW0919 14:57:48.781582       1 reflector.go:289] github.com/coredns/coredns/plugin/kubernetes/controller.go:266: watch of *v1.Endpoints ended with: very short watch: github.com/coredns/coredns/plugin/kubernetes/controller.go:266: Unexpected watch close - watch lasted less than a second and no items received\nW0919 15:00:33.935957       1 reflector.go:289] github.com/coredns/coredns/plugin/kubernetes/controller.go:271: watch of *v1.Namespace ended with: too old resource version: 18380 (30226)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Sep 19 15:04:02.861 E ns/openshift-dns pod/dns-default-lpjml node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (119) - No such process\n
Sep 19 15:04:03.258 E ns/openshift-cluster-node-tuning-operator pod/tuned-kx6wz node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=tuned container exited with code 255 (Error): _cpu: We are running on an x86 GenuineIntel platform\n2020-09-19 15:00:00,739 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-19 15:00:00,741 INFO     tuned.plugins.base: instance disk: assigning devices sda\n2020-09-19 15:00:00,743 INFO     tuned.plugins.base: instance net: assigning devices ens4\n2020-09-19 15:00:00,815 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-19 15:00:00,816 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0919 15:01:52.911336   92367 openshift-tuned.go:550] Pod (openshift-monitoring/openshift-state-metrics-6f8d898bfc-vt52m) labels changed node wide: true\nI0919 15:01:55.433869   92367 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 15:01:55.440218   92367 openshift-tuned.go:441] Getting recommended profile...\nI0919 15:01:55.565457   92367 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 15:01:55.916471   92367 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0919 15:02:00.433841   92367 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 15:02:00.436593   92367 openshift-tuned.go:441] Getting recommended profile...\nI0919 15:02:00.560675   92367 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 15:02:01.108649   92367 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-adapter-5599954b55-cq2rw) labels changed node wide: true\nI0919 15:02:05.433740   92367 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 15:02:05.436677   92367 openshift-tuned.go:441] Getting recommended profile...\nI0919 15:02:05.618478   92367 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Sep 19 15:04:03.661 E ns/openshift-monitoring pod/node-exporter-r2ggg node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal invariant violation: pod may not transition Running->Pending
Sep 19 15:04:36.527 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2020/09/19 14:57:12 Watching directory: "/etc/alertmanager/config"\n
Sep 19 15:04:37.728 E ns/openshift-ingress pod/router-default-6bdff56b44-k69px node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:02:47.918228       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:02:52.915151       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:02:57.913708       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:03:02.912847       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nW0919 15:03:04.685116       1 reflector.go:341] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nI0919 15:03:10.740262       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:03:21.385520       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0919 15:03:26.386810       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nE0919 15:03:37.917710       1 reflector.go:322] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: Failed to watch *v1.Route: Get https://172.30.0.1:443/apis/route.openshift.io/v1/routes?resourceVersion=34982&timeoutSeconds=494&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0919 15:03:37.919486       1 reflector.go:322] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: Failed to watch *v1.Endpoints: Get https://172.30.0.1:443/api/v1/endpoints?resourceVersion=35198&timeoutSeconds=590&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0919 15:03:40.791701       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Sep 19 15:04:38.326 E ns/openshift-console pod/downloads-64467b7b9c-v5st2 node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 15:04:39.532 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/09/19 14:57:20 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 19 15:04:39.532 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=prometheus-proxy container exited with code 2 (Error): 2020/09/19 14:57:20 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/19 14:57:20 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 14:57:20 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 14:57:20 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/19 14:57:20 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 14:57:20 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/19 14:57:20 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 14:57:20 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/19 14:57:20 http.go:106: HTTPS: listening on [::]:9091\n
Sep 19 15:04:39.532 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-19T14:57:19.732590566Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-09-19T14:57:19.749179472Z caller=runutil.go:88 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-19T14:57:24.861246631Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-19T14:57:24.861377793Z caller=reloader.go:154 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Sep 19 15:04:41.327 E ns/openshift-monitoring pod/kube-state-metrics-dd97588bc-glvsc node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=kube-state-metrics container exited with code 2 (Error): 
Sep 19 15:04:57.895 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 15:05:03.915 E ns/openshift-apiserver pod/apiserver-d28c6 node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=openshift-apiserver container exited with code 255 (Error): 8s_internal_local_delegation_chain_0000000010\nI0919 15:03:35.506870       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000011\nI0919 15:03:35.665480       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000012\nI0919 15:03:36.506320       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001\nI0919 15:03:36.506615       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002\nI0919 15:03:36.506738       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000003\nI0919 15:03:36.506814       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000004\nI0919 15:03:36.506854       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000007\nI0919 15:03:36.507019       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000008\nI0919 15:03:36.507120       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000005\nI0919 15:03:36.507169       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000006\nI0919 15:03:36.507208       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000009\nI0919 15:03:36.507306       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000010\nI0919 15:03:36.507360       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000011\nI0919 15:03:36.665809       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000012\n
Sep 19 15:05:04.300 E ns/openshift-image-registry pod/node-ca-4k4pz node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=node-ca container exited with code 255 (Error): 
Sep 19 15:05:04.703 E ns/openshift-monitoring pod/node-exporter-zfpzj node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 255 (Error): 20-09-19T14:43:55Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T14:43:55Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 15:05:04.703 E ns/openshift-monitoring pod/node-exporter-zfpzj node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy container exited with code 255 (Error): I0919 14:43:59.609560   36603 main.go:241] Reading certificate files\nI0919 14:43:59.609884   36603 main.go:274] Starting TCP socket on 10.0.0.2:9100\nI0919 14:43:59.610107   36603 main.go:281] Listening securely on 10.0.0.2:9100\nI0919 15:03:37.802026   36603 main.go:336] received interrupt, shutting down\nE0919 15:03:37.802461   36603 main.go:289] failed to gracefully close secure listener: close tcp 10.0.0.2:9100: use of closed network connection\n
Sep 19 15:05:05.101 E ns/openshift-controller-manager pod/controller-manager-wdvzd node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=controller-manager container exited with code 255 (Error): 
Sep 19 15:05:05.501 E ns/openshift-sdn pod/sdn-controller-xnhsc node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=sdn-controller container exited with code 255 (Error): ", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"73cf9321-fa83-11ea-94fb-42010a000004", ResourceVersion:"32610", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63736122107, loc:(*time.Location)(0x2782ae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-09-19T15:01:44Z\",\"renewTime\":\"2020-09-19T15:01:44Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:29"'. Will not report event: 'Normal' 'LeaderElection' 'ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal became leader'\nI0919 15:01:44.316564       1 leaderelection.go:227] successfully acquired lease openshift-sdn/openshift-network-controller\nI0919 15:01:44.323252       1 master.go:52] Initializing SDN master\nI0919 15:01:44.345061       1 network_controller.go:60] Started OpenShift Network Controller\nW0919 15:03:10.589056       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Namespace ended with: too old resource version: 19577 (34982)\nW0919 15:03:10.606507       1 reflector.go:289] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 29475 (34982)\nW0919 15:03:10.641094       1 reflector.go:289] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 19625 (34982)\n
Sep 19 15:05:05.901 E ns/openshift-multus pod/multus-admission-controller-6rn26 node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=multus-admission-controller container exited with code 255 (Error): 
Sep 19 15:05:06.302 E ns/openshift-cluster-node-tuning-operator pod/tuned-2xl2d node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=tuned container exited with code 255 (Error): 0919 15:02:38.793397   80164 openshift-tuned.go:550] Pod (openshift-console/console-658dc84989-mlvb2) labels changed node wide: true\nI0919 15:02:43.561605   80164 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 15:02:43.564700   80164 openshift-tuned.go:441] Getting recommended profile...\nI0919 15:02:43.716458   80164 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 15:02:43.717134   80164 openshift-tuned.go:550] Pod (openshift-operator-lifecycle-manager/packageserver-7d6cb958db-m59ft) labels changed node wide: true\nI0919 15:02:48.561568   80164 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 15:02:48.564143   80164 openshift-tuned.go:441] Getting recommended profile...\nI0919 15:02:48.703507   80164 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 15:03:03.391083   80164 openshift-tuned.go:550] Pod (openshift-machine-config-operator/etcd-quorum-guard-ddc4c66d8-5wxvn) labels changed node wide: true\nI0919 15:03:03.562265   80164 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 15:03:03.580075   80164 openshift-tuned.go:441] Getting recommended profile...\nI0919 15:03:04.099771   80164 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0919 15:03:10.685559   80164 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal) labels changed node wide: true\nI0919 15:03:13.561605   80164 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 15:03:13.564867   80164 openshift-tuned.go:441] Getting recommended profile...\nI0919 15:03:13.741408   80164 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
Sep 19 15:05:12.302 E ns/openshift-dns pod/dns-default-5x5fh node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=dns-node-resolver container exited with code 255 (Error): 
Sep 19 15:05:12.302 E ns/openshift-dns pod/dns-default-5x5fh node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=dns container exited with code 255 (Error): .:5353\n2020-09-19T14:48:43.114Z [INFO] plugin/reload: Running configuration MD5 = acefebac40c697acb35b9e96ca3c7ec9\n2020-09-19T14:48:43.115Z [INFO] CoreDNS-1.5.2\n2020-09-19T14:48:43.115Z [INFO] linux/amd64, go1.12.9, \nCoreDNS-1.5.2\nlinux/amd64, go1.12.9, \nW0919 15:03:10.587381       1 reflector.go:289] github.com/coredns/coredns/plugin/kubernetes/controller.go:271: watch of *v1.Namespace ended with: too old resource version: 19577 (34982)\n[INFO] SIGTERM: Shutting down servers then terminating\n
Sep 19 15:05:13.102 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error):        1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 15:02:53.339112       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:02:53.339433       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 15:02:53.545112       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:02:53.545466       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Sep 19 15:05:13.102 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=kube-apiserver-7 container exited with code 255 (Error): :107] OpenAPI AggregationController: Processing item v1.project.openshift.io\nI0919 15:03:30.318849       1 controller.go:107] OpenAPI AggregationController: Processing item v1.oauth.openshift.io\nI0919 15:03:37.437161       1 controlbuf.go:382] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 15:03:37.437223       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379 <nil>} {etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379 <nil>}]\nI0919 15:03:37.437258       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379 <nil>} {etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379 <nil>}]\nW0919 15:03:37.447373       1 clientconn.go:960] grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing\n2020/09/19 15:03:37 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/09/19 15:03:37 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/09/19 15:03:37 httputil: ReverseProxy read error during body copy: unexpected EOF\nI0919 15:03:37.519851       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379 <nil>}]\nW0919 15:03:37.520153       1 asm_amd64.s:1337] Failed to dial etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379: grpc: the connection is closing; please retry.\nI0919 15:03:37.802339       1 genericapiserver.go:546] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0919 15:03:37.802653       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\n
Sep 19 15:05:13.102 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=kube-apiserver-insecure-readyz-7 container exited with code 255 (Error): I0919 14:42:48.325250       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Sep 19 15:05:13.502 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:02:58.338487       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:02:58.338960       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:03:03.348107       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:03:03.348432       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:03:10.644614       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:03:10.646973       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:03:15.651040       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:03:15.651426       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:03:20.664615       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:03:20.665085       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:03:25.675131       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:03:25.675523       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:03:30.683813       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:03:30.684129       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:03:35.693525       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:03:35.693880       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Sep 19 15:05:13.502 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=kube-controller-manager-6 container exited with code 255 (Error): e.openshift.io/v1beta1, Resource=machinesets", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machines": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machines", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=prometheusrules": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=prometheusrules", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=servicemonitors": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=servicemonitors", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=subscriptions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=subscriptions", couldn't start monitor for resource "metal3.io/v1alpha1, Resource=baremetalhosts": unable to monitor quota for resource "metal3.io/v1alpha1, Resource=baremetalhosts", couldn't start monitor for resource "operator.openshift.io/v1, Resource=ingresscontrollers": unable to monitor quota for resource "operator.openshift.io/v1, Resource=ingresscontrollers", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=podmonitors": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=podmonitors", couldn't start monitor for resource "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions": unable to monitor quota for resource "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions", couldn't start monitor for resource "healthchecking.openshift.io/v1alpha1, Resource=machinehealthchecks": unable to monitor quota for resource "healthchecking.openshift.io/v1alpha1, Resource=machinehealthchecks", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=catalogsources": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=catalogsources"]\nE0919 15:03:37.414988       1 controllermanager.go:287] leaderelection lost\nI0919 15:03:37.415030       1 serving.go:89] Shutting down DynamicLoader\n
Sep 19 15:05:13.901 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=scheduler container exited with code 255 (Error): hv-m-0.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nI0919 15:02:35.791033       1 scheduler.go:572] pod openshift-operator-lifecycle-manager/packageserver-74ff95665c-7k7wh is bound successfully on node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nI0919 15:02:36.710665       1 scheduler.go:572] pod openshift-operator-lifecycle-manager/packageserver-7f88999574-ptj46 is bound successfully on node ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nI0919 15:02:50.804075       1 scheduler.go:572] pod openshift-operator-lifecycle-manager/packageserver-7f88999574-nw7gb is bound successfully on node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nI0919 15:02:59.307509       1 scheduler.go:572] pod openshift-marketplace/certified-operators-54b4d98f67-kc2l5 is bound successfully on node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nI0919 15:02:59.440382       1 scheduler.go:572] pod openshift-marketplace/community-operators-85bc7f5687-b462x is bound successfully on node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nI0919 15:02:59.600855       1 scheduler.go:572] pod openshift-marketplace/redhat-operators-58957779fc-mf796 is bound successfully on node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nW0919 15:03:10.627654       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.PersistentVolume ended with: too old resource version: 19577 (34982)\nI0919 15:03:35.009279       1 scheduler.go:572] pod openshift-authentication/oauth-openshift-57d8cc8db6-w4cff is bound successfully on node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nF0919 15:03:37.421296       1 server.go:247] leaderelection lost\n
Sep 19 15:05:14.300 E ns/kube-system pod/gcp-routes-controller-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=gcp-routes-controller container exited with code 255 (Error): I0919 15:03:11.583183   95165 run.go:51] Version: machine-config-daemon-4.2.33-202006040618-2-g204f5642-dirty (204f5642bd3adbfa5c85e74958223fb0cf8ad2db)\nI0919 15:03:11.583458   95165 run.go:54] Calling chroot("/rootfs")\n2020/09/19 15:03:11 [DEBUG] Starting checker name=dependency-check\n2020/09/19 15:03:13 [ERROR] healthcheck has failed fatal=true err=Received status code '500' does not match expected status code '200' check=dependency-check\nI0919 15:03:21.595644   95165 run.go:164] Running OnSuccess trigger\n
Sep 19 15:05:14.702 E ns/openshift-etcd pod/etcd-member-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=etcd-member container exited with code 255 (Error): )\n2020-09-19 15:03:37.932126 W | rafthttp: lost the TCP streaming connection with peer 296bf7dcc71524af (stream Message reader)\n2020-09-19 15:03:37.932144 I | rafthttp: stopped streaming with peer 296bf7dcc71524af (stream Message reader)\n2020-09-19 15:03:37.932153 I | rafthttp: stopped peer 296bf7dcc71524af\n2020-09-19 15:03:37.932162 I | rafthttp: stopping peer 583891edb30d5854...\n2020-09-19 15:03:37.932476 I | rafthttp: closed the TCP streaming connection with peer 583891edb30d5854 (stream MsgApp v2 writer)\n2020-09-19 15:03:37.932509 I | rafthttp: stopped streaming with peer 583891edb30d5854 (writer)\n2020-09-19 15:03:37.932758 I | rafthttp: closed the TCP streaming connection with peer 583891edb30d5854 (stream Message writer)\n2020-09-19 15:03:37.932775 I | rafthttp: stopped streaming with peer 583891edb30d5854 (writer)\n2020-09-19 15:03:37.932864 I | rafthttp: stopped HTTP pipelining with peer 583891edb30d5854\n2020-09-19 15:03:37.932947 W | rafthttp: lost the TCP streaming connection with peer 583891edb30d5854 (stream MsgApp v2 reader)\n2020-09-19 15:03:37.932965 E | rafthttp: failed to read 583891edb30d5854 on stream MsgApp v2 (context canceled)\n2020-09-19 15:03:37.932972 I | rafthttp: peer 583891edb30d5854 became inactive (message send to peer failed)\n2020-09-19 15:03:37.932980 I | rafthttp: stopped streaming with peer 583891edb30d5854 (stream MsgApp v2 reader)\n2020-09-19 15:03:37.933073 W | rafthttp: lost the TCP streaming connection with peer 583891edb30d5854 (stream Message reader)\n2020-09-19 15:03:37.933106 I | rafthttp: stopped streaming with peer 583891edb30d5854 (stream Message reader)\n2020-09-19 15:03:37.933115 I | rafthttp: stopped peer 583891edb30d5854\n2020-09-19 15:03:37.938760 I | embed: rejected connection from "10.0.0.5:51240" (error "read tcp 10.0.0.2:2380->10.0.0.5:51240: use of closed network connection", ServerName "")\n2020-09-19 15:03:37.938795 I | embed: rejected connection from "10.0.0.5:51242" (error "read tcp 10.0.0.2:2380->10.0.0.5:51242: use of closed network connection", ServerName "")\n
Sep 19 15:05:14.702 E ns/openshift-etcd pod/etcd-member-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=etcd-metrics container exited with code 255 (Error): 2020-09-19 15:03:15.610670 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 15:03:15.614320 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-09-19 15:03:15.614719 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 15:03:15.634396 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Sep 19 15:05:17.902 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=kube-controller-manager-cert-syncer-6 container exited with code 255 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:02:58.338487       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:02:58.338960       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:03:03.348107       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:03:03.348432       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:03:10.644614       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:03:10.646973       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:03:15.651040       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:03:15.651426       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:03:20.664615       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:03:20.665085       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:03:25.675131       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:03:25.675523       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:03:30.683813       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:03:30.684129       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0919 15:03:35.693525       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:03:35.693880       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Sep 19 15:05:17.902 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=kube-controller-manager-6 container exited with code 255 (Error): e.openshift.io/v1beta1, Resource=machinesets", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machines": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machines", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=prometheusrules": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=prometheusrules", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=servicemonitors": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=servicemonitors", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=subscriptions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=subscriptions", couldn't start monitor for resource "metal3.io/v1alpha1, Resource=baremetalhosts": unable to monitor quota for resource "metal3.io/v1alpha1, Resource=baremetalhosts", couldn't start monitor for resource "operator.openshift.io/v1, Resource=ingresscontrollers": unable to monitor quota for resource "operator.openshift.io/v1, Resource=ingresscontrollers", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=podmonitors": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=podmonitors", couldn't start monitor for resource "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions": unable to monitor quota for resource "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions", couldn't start monitor for resource "healthchecking.openshift.io/v1alpha1, Resource=machinehealthchecks": unable to monitor quota for resource "healthchecking.openshift.io/v1alpha1, Resource=machinehealthchecks", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=catalogsources": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=catalogsources"]\nE0919 15:03:37.414988       1 controllermanager.go:287] leaderelection lost\nI0919 15:03:37.415030       1 serving.go:89] Shutting down DynamicLoader\n
Sep 19 15:05:18.703 E ns/openshift-etcd pod/etcd-member-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=etcd-member container exited with code 255 (Error): )\n2020-09-19 15:03:37.932126 W | rafthttp: lost the TCP streaming connection with peer 296bf7dcc71524af (stream Message reader)\n2020-09-19 15:03:37.932144 I | rafthttp: stopped streaming with peer 296bf7dcc71524af (stream Message reader)\n2020-09-19 15:03:37.932153 I | rafthttp: stopped peer 296bf7dcc71524af\n2020-09-19 15:03:37.932162 I | rafthttp: stopping peer 583891edb30d5854...\n2020-09-19 15:03:37.932476 I | rafthttp: closed the TCP streaming connection with peer 583891edb30d5854 (stream MsgApp v2 writer)\n2020-09-19 15:03:37.932509 I | rafthttp: stopped streaming with peer 583891edb30d5854 (writer)\n2020-09-19 15:03:37.932758 I | rafthttp: closed the TCP streaming connection with peer 583891edb30d5854 (stream Message writer)\n2020-09-19 15:03:37.932775 I | rafthttp: stopped streaming with peer 583891edb30d5854 (writer)\n2020-09-19 15:03:37.932864 I | rafthttp: stopped HTTP pipelining with peer 583891edb30d5854\n2020-09-19 15:03:37.932947 W | rafthttp: lost the TCP streaming connection with peer 583891edb30d5854 (stream MsgApp v2 reader)\n2020-09-19 15:03:37.932965 E | rafthttp: failed to read 583891edb30d5854 on stream MsgApp v2 (context canceled)\n2020-09-19 15:03:37.932972 I | rafthttp: peer 583891edb30d5854 became inactive (message send to peer failed)\n2020-09-19 15:03:37.932980 I | rafthttp: stopped streaming with peer 583891edb30d5854 (stream MsgApp v2 reader)\n2020-09-19 15:03:37.933073 W | rafthttp: lost the TCP streaming connection with peer 583891edb30d5854 (stream Message reader)\n2020-09-19 15:03:37.933106 I | rafthttp: stopped streaming with peer 583891edb30d5854 (stream Message reader)\n2020-09-19 15:03:37.933115 I | rafthttp: stopped peer 583891edb30d5854\n2020-09-19 15:03:37.938760 I | embed: rejected connection from "10.0.0.5:51240" (error "read tcp 10.0.0.2:2380->10.0.0.5:51240: use of closed network connection", ServerName "")\n2020-09-19 15:03:37.938795 I | embed: rejected connection from "10.0.0.5:51242" (error "read tcp 10.0.0.2:2380->10.0.0.5:51242: use of closed network connection", ServerName "")\n
Sep 19 15:05:18.703 E ns/openshift-etcd pod/etcd-member-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=etcd-metrics container exited with code 255 (Error): 2020-09-19 15:03:15.610670 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 15:03:15.614320 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-09-19 15:03:15.614719 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-09-19 15:03:15.634396 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Sep 19 15:05:19.110 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=kube-apiserver-cert-syncer-7 container exited with code 255 (Error):        1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 15:02:53.339112       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:02:53.339433       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0919 15:02:53.545112       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0919 15:02:53.545466       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Sep 19 15:05:19.110 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=kube-apiserver-7 container exited with code 255 (Error): :107] OpenAPI AggregationController: Processing item v1.project.openshift.io\nI0919 15:03:30.318849       1 controller.go:107] OpenAPI AggregationController: Processing item v1.oauth.openshift.io\nI0919 15:03:37.437161       1 controlbuf.go:382] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0919 15:03:37.437223       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379 <nil>} {etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379 <nil>}]\nI0919 15:03:37.437258       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379 <nil>} {etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379 <nil>}]\nW0919 15:03:37.447373       1 clientconn.go:960] grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing\n2020/09/19 15:03:37 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/09/19 15:03:37 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/09/19 15:03:37 httputil: ReverseProxy read error during body copy: unexpected EOF\nI0919 15:03:37.519851       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd-0.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379 <nil>}]\nW0919 15:03:37.520153       1 asm_amd64.s:1337] Failed to dial etcd-1.ci-op-55jkc77t-0675a.origin-ci-int-gce.dev.openshift.com:2379: grpc: the connection is closing; please retry.\nI0919 15:03:37.802339       1 genericapiserver.go:546] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0919 15:03:37.802653       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\n
Sep 19 15:05:19.110 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=kube-apiserver-insecure-readyz-7 container exited with code 255 (Error): I0919 14:42:48.325250       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Sep 19 15:05:19.515 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=scheduler container exited with code 255 (Error): hv-m-0.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nI0919 15:02:35.791033       1 scheduler.go:572] pod openshift-operator-lifecycle-manager/packageserver-74ff95665c-7k7wh is bound successfully on node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nI0919 15:02:36.710665       1 scheduler.go:572] pod openshift-operator-lifecycle-manager/packageserver-7f88999574-ptj46 is bound successfully on node ci-op--b42hv-m-0.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nI0919 15:02:50.804075       1 scheduler.go:572] pod openshift-operator-lifecycle-manager/packageserver-7f88999574-nw7gb is bound successfully on node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nI0919 15:02:59.307509       1 scheduler.go:572] pod openshift-marketplace/certified-operators-54b4d98f67-kc2l5 is bound successfully on node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nI0919 15:02:59.440382       1 scheduler.go:572] pod openshift-marketplace/community-operators-85bc7f5687-b462x is bound successfully on node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nI0919 15:02:59.600855       1 scheduler.go:572] pod openshift-marketplace/redhat-operators-58957779fc-mf796 is bound successfully on node ci-op--b42hv-w-b-r5tvf.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nW0919 15:03:10.627654       1 reflector.go:293] k8s.io/client-go/informers/factory.go:133: watch of *v1.PersistentVolume ended with: too old resource version: 19577 (34982)\nI0919 15:03:35.009279       1 scheduler.go:572] pod openshift-authentication/oauth-openshift-57d8cc8db6-w4cff is bound successfully on node ci-op--b42hv-m-1.c.openshift-gce-devel-ci.internal, 6 nodes evaluated, 2 nodes were found feasible\nF0919 15:03:37.421296       1 server.go:247] leaderelection lost\n
Sep 19 15:05:27.895 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 15:05:38.584 E clusteroperator/authentication changed Degraded to True: MultipleConditionsMatching: RouteHealthDegraded: failed to GET route: dial tcp 35.229.37.94:443: connect: no route to host\nWellKnownEndpointDegraded: failed to GET well-known https://10.0.0.2:6443/.well-known/oauth-authorization-server: dial tcp 10.0.0.2:6443: connect: connection refused
Sep 19 15:05:53.111 E ns/openshift-apiserver pod/apiserver-d28c6 node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=openshift-apiserver container exited with code 255 (Error):    1 client.go:352] parsed scheme: ""\nI0919 15:05:22.047229       1 client.go:352] scheme "" not registered, fallback to default scheme\nI0919 15:05:22.047273       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.openshift-etcd.svc:2379 0  <nil>}]\nI0919 15:05:22.047331       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 15:05:32.067210       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 15:05:32.067572       1 client.go:352] parsed scheme: ""\nI0919 15:05:32.067594       1 client.go:352] scheme "" not registered, fallback to default scheme\nI0919 15:05:32.067633       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{etcd.openshift-etcd.svc:2379 0  <nil>}]\nI0919 15:05:32.067686       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0919 15:05:52.067789       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []\nW0919 15:05:52.067849       1 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {etcd.openshift-etcd.svc:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: operation was canceled". Reconnecting...\nW0919 15:05:52.067901       1 asm_amd64.s:1337] Failed to dial etcd.openshift-etcd.svc:2379: grpc: the connection is closing; please retry.\nF0919 15:05:52.067908       1 storage_decorator.go:57] Unable to create storage backend: config (&{etcd3 openshift.io {[https://etcd.openshift-etcd.svc:2379] /var/run/secrets/etcd-client/tls.key /var/run/secrets/etcd-client/tls.crt /var/run/configmaps/etcd-serving-ca/ca-bundle.crt} false true {0xc000e2b950 0xc000e2b9e0} {{apps.openshift.io v1} [{apps.openshift.io } {apps.openshift.io }] false} <nil> 5m0s 1m0s}), err (context deadline exceeded)\nI0919 15:05:52.068478       1 controlbuf.go:382] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Sep 19 15:05:56.058 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-ddc4c66d8-pnjj7 node/ci-op--b42hv-m-2.c.openshift-gce-devel-ci.internal container=guard container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Sep 19 15:06:19.702 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b42hv-w-c-4fb9c.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): =info ts=2020-09-19T15:06:17.160Z caller=main.go:332 fd_limits="(soft=1048576, hard=1048576)"\nlevel=info ts=2020-09-19T15:06:17.160Z caller=main.go:333 vm_limits="(soft=unlimited, hard=unlimited)"\nlevel=info ts=2020-09-19T15:06:17.162Z caller=main.go:652 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T15:06:17.162Z caller=web.go:448 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T15:06:17.171Z caller=main.go:667 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T15:06:17.171Z caller=main.go:668 msg="TSDB started"\nlevel=info ts=2020-09-19T15:06:17.171Z caller=main.go:738 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T15:06:17.171Z caller=main.go:521 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T15:06:17.171Z caller=main.go:535 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T15:06:17.171Z caller=main.go:557 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T15:06:17.171Z caller=main.go:531 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T15:06:17.171Z caller=main.go:517 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T15:06:17.171Z caller=manager.go:776 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T15:06:17.172Z caller=manager.go:782 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T15:06:17.172Z caller=main.go:551 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T15:06:17.173Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T15:06:17.173Z caller=main.go:722 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19T15:06:17.173Z caller=main.go:731 err="error loading config from \"/etc/prometheus/config_out/prometheus.env.yaml\": couldn't load configuration (--config.file=\"/etc/prometheus/config_out/prometheus.env.yaml\"): open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 19 15:07:00.364 E ns/openshift-monitoring pod/node-exporter-hrbvp node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=node-exporter container exited with code 255 (Error): 20-09-19T14:43:43Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T14:43:43Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 15:07:00.364 E ns/openshift-monitoring pod/node-exporter-hrbvp node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy container exited with code 255 (Error): I0919 14:43:43.251133   47329 main.go:241] Reading certificate files\nI0919 14:43:43.251543   47329 main.go:274] Starting TCP socket on 10.0.32.4:9100\nI0919 14:43:43.251897   47329 main.go:281] Listening securely on 10.0.32.4:9100\nI0919 15:05:36.802190   47329 main.go:336] received interrupt, shutting down\nE0919 15:05:36.802590   47329 main.go:289] failed to gracefully close secure listener: close tcp 10.0.32.4:9100: use of closed network connection\n
Sep 19 15:07:00.797 E ns/openshift-image-registry pod/node-ca-vb4zg node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=node-ca container exited with code 255 (Error): 
Sep 19 15:07:01.236 E ns/openshift-sdn pod/sdn-f57b2 node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error): oxy: processing 0 service events\nI0919 15:03:41.038428   64668 proxier.go:346] userspace syncProxyRules took 92.045293ms\nI0919 15:03:41.038438   64668 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 15:04:11.038715   64668 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 15:04:11.227468   64668 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 15:04:11.314888   64668 proxier.go:367] userspace proxy: processing 0 service events\nI0919 15:04:11.314921   64668 proxier.go:346] userspace syncProxyRules took 87.420551ms\nI0919 15:04:11.314933   64668 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 15:04:41.315202   64668 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 15:04:41.497954   64668 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 15:04:41.587198   64668 proxier.go:367] userspace proxy: processing 0 service events\nI0919 15:04:41.587230   64668 proxier.go:346] userspace syncProxyRules took 89.226095ms\nI0919 15:04:41.587242   64668 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0919 15:05:09.225775   64668 roundrobin.go:310] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.0.2:6443 10.0.0.3:6443 10.0.0.5:6443]\nI0919 15:05:09.225825   64668 roundrobin.go:240] Delete endpoint 10.0.0.2:6443 for service "default/kubernetes:https"\nI0919 15:05:09.225876   64668 proxy.go:331] hybrid proxy: syncProxyRules start\nI0919 15:05:09.453743   64668 proxy.go:334] hybrid proxy: mainProxy.syncProxyRules complete\nI0919 15:05:09.560285   64668 proxier.go:367] userspace proxy: processing 0 service events\nI0919 15:05:09.560316   64668 proxier.go:346] userspace syncProxyRules took 106.550064ms\nI0919 15:05:09.560329   64668 proxy.go:337] hybrid proxy: unidlingProxy.syncProxyRules complete\ninterrupt: Gracefully shutting down ...\nI0919 15:05:37.108185   64668 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\n
Sep 19 15:07:01.671 E ns/openshift-dns pod/dns-default-nb8jh node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=dns container exited with code 255 (Error): .:5353\n2020-09-19T14:48:35.345Z [INFO] plugin/reload: Running configuration MD5 = acefebac40c697acb35b9e96ca3c7ec9\n2020-09-19T14:48:35.346Z [INFO] CoreDNS-1.5.2\n2020-09-19T14:48:35.346Z [INFO] linux/amd64, go1.12.9, \nCoreDNS-1.5.2\nlinux/amd64, go1.12.9, \nW0919 15:03:10.587648       1 reflector.go:289] github.com/coredns/coredns/plugin/kubernetes/controller.go:271: watch of *v1.Namespace ended with: too old resource version: 19577 (34982)\nW0919 15:03:37.913080       1 reflector.go:289] github.com/coredns/coredns/plugin/kubernetes/controller.go:266: watch of *v1.Endpoints ended with: very short watch: github.com/coredns/coredns/plugin/kubernetes/controller.go:266: Unexpected watch close - watch lasted less than a second and no items received\nW0919 15:03:37.913259       1 reflector.go:289] github.com/coredns/coredns/plugin/kubernetes/controller.go:271: watch of *v1.Namespace ended with: very short watch: github.com/coredns/coredns/plugin/kubernetes/controller.go:271: Unexpected watch close - watch lasted less than a second and no items received\n[INFO] SIGTERM: Shutting down servers then terminating\n
Sep 19 15:07:01.671 E ns/openshift-dns pod/dns-default-nb8jh node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=dns-node-resolver container exited with code 255 (Error): /bin/bash: line 1: kill: (139) - No such process\n
Sep 19 15:07:02.109 E ns/openshift-sdn pod/ovs-dcxtc node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=openvswitch container exited with code 255 (Error): st 0 s (2 deletes)\n2020-09-19T15:04:32.308Z|00184|connmgr|INFO|br0<->unix#319: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T15:04:32.356Z|00185|bridge|INFO|bridge br0: deleted interface vethf4069c2f on port 16\n2020-09-19T15:04:32.412Z|00186|connmgr|INFO|br0<->unix#322: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T15:04:32.462Z|00187|connmgr|INFO|br0<->unix#325: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T15:04:32.498Z|00188|bridge|INFO|bridge br0: deleted interface veth956d869a on port 4\n2020-09-19T15:04:32.549Z|00189|connmgr|INFO|br0<->unix#328: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T15:04:32.592Z|00190|connmgr|INFO|br0<->unix#331: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T15:04:32.654Z|00191|bridge|INFO|bridge br0: deleted interface veth34d32d0c on port 14\n2020-09-19T15:04:37.275Z|00192|connmgr|INFO|br0<->unix#337: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T15:04:37.308Z|00193|connmgr|INFO|br0<->unix#340: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T15:04:37.342Z|00194|bridge|INFO|bridge br0: deleted interface veth86587a55 on port 19\n2020-09-19T15:05:00.480Z|00016|jsonrpc|WARN|Dropped 1 log messages in last 926 seconds (most recently, 926 seconds ago) due to excessive rate\n2020-09-19T15:05:00.480Z|00017|jsonrpc|WARN|unix#252: receive error: Connection reset by peer\n2020-09-19T15:05:00.480Z|00018|reconnect|WARN|unix#252: connection dropped (Connection reset by peer)\n2020-09-19T15:05:00.325Z|00195|connmgr|INFO|br0<->unix#343: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T15:05:00.358Z|00196|connmgr|INFO|br0<->unix#346: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T15:05:00.390Z|00197|bridge|INFO|bridge br0: deleted interface vethd3d55dd4 on port 6\n2020-09-19T15:05:00.430Z|00198|connmgr|INFO|br0<->unix#349: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-19T15:05:00.471Z|00199|connmgr|INFO|br0<->unix#352: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-19T15:05:00.503Z|00200|bridge|INFO|bridge br0: deleted interface veth4c571bc5 on port 17\nTerminated\n
Sep 19 15:07:02.543 E ns/openshift-multus pod/multus-j64j2 node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=kube-multus container exited with code 255 (Error): 
Sep 19 15:07:02.981 E ns/openshift-machine-config-operator pod/machine-config-daemon-4v6rb node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=machine-config-daemon container exited with code 255 (Error): 549ee6b0ce2560c145210d44072ede57d27571889f98e4\nI0919 15:05:02.281046  108802 run.go:16] Running: podman pull -q --authfile /var/lib/kubelet/config.json registry.svc.ci.openshift.org/ocp/4.2-2020-09-19-113420@sha256:d0bf79a4730992d8e2a47ed580c81b50841a2180db4e3f81ad8d2b91cf153eb4\n2020-09-19 15:05:02.533254821 +0000 UTC m=+0.122540938 system refresh\n2020-09-19 15:05:22.802259497 +0000 UTC m=+20.391545567 image pull  \ne205650b2a4292cc6189b48a52919ade9be095d6914aba1a3273eef4b4ddfb72\nI0919 15:05:22.810270  108802 rpm-ostree.go:357] Running captured: podman inspect --type=image registry.svc.ci.openshift.org/ocp/4.2-2020-09-19-113420@sha256:d0bf79a4730992d8e2a47ed580c81b50841a2180db4e3f81ad8d2b91cf153eb4\nI0919 15:05:22.924763  108802 rpm-ostree.go:357] Running captured: podman create --net=none --annotation=org.openshift.machineconfigoperator.pivot=true --name ostree-container-pivot-8a7ebed1-fa89-11ea-a55a-42010a002004 registry.svc.ci.openshift.org/ocp/4.2-2020-09-19-113420@sha256:d0bf79a4730992d8e2a47ed580c81b50841a2180db4e3f81ad8d2b91cf153eb4\nI0919 15:05:23.086245  108802 rpm-ostree.go:357] Running captured: podman mount 1820e379991d3868fa5a30ea4669254e16cd61c1eee5d414d633ace8c20b01ec\nI0919 15:05:23.205314  108802 rpm-ostree.go:238] Pivoting to: 42.81.20200919.0 (2e2d1f95863f994a79706245ce124c822940b7f6eec902b4ddf185ad36d7e601)\nclient(id:cli dbus:1.556 unit:machine-config-daemon-host.service uid:0) added; new total=1\nInitiated txn UpdateDeployment for client(id:cli dbus:1.556 unit:machine-config-daemon-host.service uid:0): /org/projectatomic/rpmostree1/rhcos\nsanitycheck(/usr/bin/true) successful\nTxn UpdateDeployment on /org/projectatomic/rpmostree1/rhcos successful\nclient(id:cli dbus:1.556 unit:machine-config-daemon-host.service uid:0) vanished; remaining=0\nIn idle state; will auto-exit in 61 seconds\nI0919 15:05:36.684718   73680 update.go:993] initiating reboot: Node will reboot into config rendered-worker-acfc632a5976a508e7fadb80fb0dc731\nI0919 15:05:36.831375   73680 daemon.go:505] Shutting down MachineConfigDaemon\n
Sep 19 15:07:03.381 E ns/openshift-cluster-node-tuning-operator pod/tuned-n54r2 node/ci-op--b42hv-w-d-trbgk.c.openshift-gce-devel-ci.internal container=tuned container exited with code 255 (Error): openshift-monitoring/telemeter-client-867bbb54f7-nvh85) labels changed node wide: true\nI0919 15:04:35.192414   91020 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 15:04:35.200299   91020 openshift-tuned.go:441] Getting recommended profile...\nI0919 15:04:35.342907   91020 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 15:04:35.530359   91020 openshift-tuned.go:550] Pod (openshift-marketplace/community-operators-74f789bf64-z4d98) labels changed node wide: true\nI0919 15:04:40.192271   91020 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 15:04:40.197856   91020 openshift-tuned.go:441] Getting recommended profile...\nI0919 15:04:40.339699   91020 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 15:04:40.340366   91020 openshift-tuned.go:550] Pod (openshift-image-registry/image-registry-86b4649bfd-49xjz) labels changed node wide: true\nI0919 15:04:45.192357   91020 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 15:04:45.195492   91020 openshift-tuned.go:441] Getting recommended profile...\nI0919 15:04:45.345583   91020 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0919 15:05:01.811654   91020 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-4403/foo-vs7qh) labels changed node wide: false\nI0919 15:05:01.832137   91020 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-4403/foo-qs24w) labels changed node wide: true\nI0919 15:05:05.192233   91020 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0919 15:05:05.195925   91020 openshift-tuned.go:441] Getting recommended profile...\nI0919 15:05:05.377387   91020 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n