Result          SUCCESS
Tests           1 failed / 15 succeeded
Started         2019-04-29 05:52
Elapsed         1h19m
Work namespace  ci-op-bzjc847m
Refs
pod             4.1.0-0.ci-2019-04-29-055100-upgrade

Test Failures


openshift-tests Monitor cluster while tests execute (46m45s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
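The --ginkgo.focus argument is a regular expression selecting the single failed test. As an illustrative sketch only (assuming the command is run from an openshift/origin checkout against a reachable cluster, neither of which this report states), the same invocation can be written with the regex annotated:

# Re-run only this failed test. In the focus regex, '\s' stands for the
# literal spaces in the test name and the hyphen is escaped, so it matches
# exactly "openshift-tests Monitor cluster while tests execute".
go run hack/e2e.go -v -test \
  --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'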
248 error level events were detected during this test run:

Apr 29 06:25:42.043 E ns/openshift-sdn pod/sdn-6cldr node/ip-10-0-172-158.ec2.internal container=sdn container exited with code 1 (Error): tests-service-upgrade-pr4dh/service-test:" at 172.30.146.33:80/TCP\nI0429 06:22:39.414509    1296 roundrobin.go:310] LoadBalancerRR: Setting endpoints for e2e-tests-service-upgrade-pr4dh/service-test: to [10.128.2.13:80]\nI0429 06:22:40.155644    1296 roundrobin.go:310] LoadBalancerRR: Setting endpoints for e2e-tests-service-upgrade-pr4dh/service-test: to [10.128.2.13:80 10.129.2.14:80]\nI0429 06:22:40.155681    1296 roundrobin.go:240] Delete endpoint 10.129.2.14:80 for service "e2e-tests-service-upgrade-pr4dh/service-test:"\nI0429 06:24:52.542266    1296 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-cluster-version/cluster-version-operator:metrics\nI0429 06:25:00.931225    1296 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-cluster-version/cluster-version-operator:metrics to [10.0.148.116:9099]\nI0429 06:25:00.931261    1296 roundrobin.go:240] Delete endpoint 10.0.148.116:9099 for service "openshift-cluster-version/cluster-version-operator:metrics"\nI0429 06:25:26.115088    1296 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.131.108:9101 10.0.135.108:9101 10.0.157.63:9101 10.0.172.158:9101 10.0.173.108:9101]\nI0429 06:25:26.115151    1296 roundrobin.go:240] Delete endpoint 10.0.148.116:9101 for service "openshift-sdn/sdn:metrics"\nI0429 06:25:40.341077    1296 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.131.108:9101 10.0.135.108:9101 10.0.148.116:9101 10.0.157.63:9101 10.0.172.158:9101 10.0.173.108:9101]\nI0429 06:25:40.341115    1296 roundrobin.go:240] Delete endpoint 10.0.148.116:9101 for service "openshift-sdn/sdn:metrics"\nI0429 06:25:40.360455    1296 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.131.108:9101 10.0.135.108:9101 10.0.148.116:9101 10.0.157.63:9101 10.0.173.108:9101]\nI0429 06:25:40.360485    1296 roundrobin.go:240] Delete endpoint 10.0.172.158:9101 for service "openshift-sdn/sdn:metrics"\ninterrupt: Gracefully shutting down ...\n
Apr 29 06:25:45.788 E ns/openshift-multus pod/multus-2kr5s node/ip-10-0-135-108.ec2.internal container=kube-multus container exited with code 137 (Error): 
Apr 29 06:25:54.124 E ns/openshift-sdn pod/sdn-controller-2wbj7 node/ip-10-0-148-116.ec2.internal container=sdn-controller container exited with code 137 (Error): ver is currently unable to handle the request\nI0429 06:13:39.747206       1 vnids.go:115] Allocated netid 8152615 for namespace "openshift"\nI0429 06:13:39.819692       1 vnids.go:115] Allocated netid 7523476 for namespace "openshift-node"\nI0429 06:13:50.908611       1 vnids.go:115] Allocated netid 14055021 for namespace "openshift-console"\nI0429 06:13:50.998382       1 vnids.go:115] Allocated netid 146520 for namespace "openshift-console-operator"\nI0429 06:15:02.599186       1 vnids.go:115] Allocated netid 12728243 for namespace "openshift-ingress"\nW0429 06:20:02.939517       1 reflector.go:256] k8s.io/client-go/informers/factory.go:132: watch of *v1.Namespace ended with: too old resource version: 10699 (14469)\nW0429 06:20:03.089844       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 6292 (15188)\nW0429 06:20:03.464857       1 reflector.go:256] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 10693 (15188)\nI0429 06:22:23.147534       1 vnids.go:115] Allocated netid 877469 for namespace "e2e-tests-sig-apps-job-upgrade-hgl8w"\nI0429 06:22:23.158449       1 vnids.go:115] Allocated netid 9578864 for namespace "e2e-tests-sig-apps-daemonset-upgrade-js6vk"\nI0429 06:22:23.166235       1 vnids.go:115] Allocated netid 4573859 for namespace "e2e-tests-sig-apps-replicaset-upgrade-d9s6w"\nI0429 06:22:23.185871       1 vnids.go:115] Allocated netid 14335526 for namespace "e2e-tests-sig-storage-sig-api-machinery-secret-upgrade-4zbcn"\nI0429 06:22:23.200715       1 vnids.go:115] Allocated netid 6916348 for namespace "e2e-tests-sig-storage-sig-api-machinery-configmap-upgrade-r4pxc"\nI0429 06:22:23.214168       1 vnids.go:115] Allocated netid 9371965 for namespace "e2e-tests-service-upgrade-pr4dh"\nI0429 06:22:23.227586       1 vnids.go:115] Allocated netid 14926750 for namespace "e2e-tests-sig-apps-deployment-upgrade-6zbsg"\n
Apr 29 06:25:54.665 E ns/openshift-sdn pod/sdn-zk2vl node/ip-10-0-157-63.ec2.internal container=sdn container exited with code 1 (Error): 29 06:25:00.931787    1344 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-cluster-version/cluster-version-operator:metrics to [10.0.148.116:9099]\nI0429 06:25:00.931822    1344 roundrobin.go:240] Delete endpoint 10.0.148.116:9099 for service "openshift-cluster-version/cluster-version-operator:metrics"\nI0429 06:25:26.115752    1344 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.131.108:9101 10.0.135.108:9101 10.0.157.63:9101 10.0.172.158:9101 10.0.173.108:9101]\nI0429 06:25:26.115791    1344 roundrobin.go:240] Delete endpoint 10.0.148.116:9101 for service "openshift-sdn/sdn:metrics"\nI0429 06:25:40.341908    1344 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.131.108:9101 10.0.135.108:9101 10.0.148.116:9101 10.0.157.63:9101 10.0.172.158:9101 10.0.173.108:9101]\nI0429 06:25:40.341951    1344 roundrobin.go:240] Delete endpoint 10.0.148.116:9101 for service "openshift-sdn/sdn:metrics"\nI0429 06:25:40.359251    1344 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.131.108:9101 10.0.135.108:9101 10.0.148.116:9101 10.0.157.63:9101 10.0.173.108:9101]\nI0429 06:25:40.359282    1344 roundrobin.go:240] Delete endpoint 10.0.172.158:9101 for service "openshift-sdn/sdn:metrics"\nI0429 06:25:53.486126    1344 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.131.108:9101 10.0.135.108:9101 10.0.148.116:9101 10.0.157.63:9101 10.0.172.158:9101 10.0.173.108:9101]\nI0429 06:25:53.486169    1344 roundrobin.go:240] Delete endpoint 10.0.172.158:9101 for service "openshift-sdn/sdn:metrics"\nI0429 06:25:53.504170    1344 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.131.108:9101 10.0.135.108:9101 10.0.148.116:9101 10.0.172.158:9101 10.0.173.108:9101]\nI0429 06:25:53.504201    1344 roundrobin.go:240] Delete endpoint 10.0.157.63:9101 for service "openshift-sdn/sdn:metrics"\ninterrupt: Gracefully shutting down ...\n
Apr 29 06:25:59.539 E ns/openshift-sdn pod/sdn-8lvv8 node/ip-10-0-131-108.ec2.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:25:58.101971    1289 healthcheck.go:87] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:25:58.132295    1289 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:25:58.232304    1289 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:25:58.332337    1289 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:25:58.432398    1289 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:25:58.532395    1289 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:25:58.632375    1289 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:25:58.732337    1289 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:25:58.832322    1289 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:25:58.932336    1289 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:25:59.038018    1289 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0429 06:25:59.038071    1289 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 29 06:26:14.514 E ns/openshift-sdn pod/sdn-nbwcq node/ip-10-0-173-108.ec2.internal container=sdn container exited with code 1 (Error): robin.go:240] Delete endpoint 10.0.157.63:9101 for service "openshift-sdn/sdn:metrics"\nI0429 06:25:59.538064    2434 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.135.108:9101 10.0.148.116:9101 10.0.172.158:9101 10.0.173.108:9101]\nI0429 06:25:59.538117    2434 roundrobin.go:240] Delete endpoint 10.0.131.108:9101 for service "openshift-sdn/sdn:metrics"\nI0429 06:26:02.990560    2434 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.135.108:9101 10.0.148.116:9101 10.0.157.63:9101 10.0.172.158:9101 10.0.173.108:9101]\nI0429 06:26:02.990603    2434 roundrobin.go:240] Delete endpoint 10.0.157.63:9101 for service "openshift-sdn/sdn:metrics"\nI0429 06:26:08.159824    2434 roundrobin.go:310] LoadBalancerRR: Setting endpoints for e2e-tests-service-upgrade-pr4dh/service-test: to [10.128.2.13:80]\nI0429 06:26:08.159893    2434 roundrobin.go:240] Delete endpoint 10.129.2.14:80 for service "e2e-tests-service-upgrade-pr4dh/service-test:"\nI0429 06:26:10.152189    2434 roundrobin.go:310] LoadBalancerRR: Setting endpoints for e2e-tests-service-upgrade-pr4dh/service-test: to [10.128.2.13:80 10.129.2.14:80]\nI0429 06:26:10.152223    2434 roundrobin.go:240] Delete endpoint 10.129.2.14:80 for service "e2e-tests-service-upgrade-pr4dh/service-test:"\nI0429 06:26:13.170070    2434 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.131.108:9101 10.0.135.108:9101 10.0.148.116:9101 10.0.157.63:9101 10.0.172.158:9101 10.0.173.108:9101]\nI0429 06:26:13.170124    2434 roundrobin.go:240] Delete endpoint 10.0.131.108:9101 for service "openshift-sdn/sdn:metrics"\nI0429 06:26:13.187630    2434 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.131.108:9101 10.0.135.108:9101 10.0.148.116:9101 10.0.157.63:9101 10.0.172.158:9101]\nI0429 06:26:13.187677    2434 roundrobin.go:240] Delete endpoint 10.0.173.108:9101 for service "openshift-sdn/sdn:metrics"\ninterrupt: Gracefully shutting down ...\n
Apr 29 06:26:22.740 E ns/openshift-multus pod/multus-nvm5j node/ip-10-0-157-63.ec2.internal container=kube-multus container exited with code 137 (Error): 
Apr 29 06:26:30.238 E ns/openshift-sdn pod/ovs-kktrf node/ip-10-0-148-116.ec2.internal container=openvswitch container exited with code 137 (Error): :16.523Z|00236|connmgr|INFO|br0<->unix#525: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:22:16.558Z|00237|connmgr|INFO|br0<->unix#528: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:22:16.581Z|00238|bridge|INFO|bridge br0: deleted interface veth35dd68c9 on port 38\n2019-04-29T06:22:30.105Z|00239|bridge|INFO|bridge br0: added interface veth912745a9 on port 39\n2019-04-29T06:22:30.135Z|00240|connmgr|INFO|br0<->unix#534: 5 flow_mods in the last 0 s (5 adds)\n2019-04-29T06:22:30.179Z|00241|connmgr|INFO|br0<->unix#538: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:22:30.182Z|00242|connmgr|INFO|br0<->unix#540: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2019-04-29T06:25:33.232Z|00243|connmgr|INFO|br0<->unix#569: 2 flow_mods in the last 0 s (2 adds)\n2019-04-29T06:25:33.298Z|00244|connmgr|INFO|br0<->unix#575: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:25:33.328Z|00245|connmgr|INFO|br0<->unix#578: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:25:33.355Z|00246|connmgr|INFO|br0<->unix#581: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:25:33.502Z|00247|connmgr|INFO|br0<->unix#584: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:33.532Z|00248|connmgr|INFO|br0<->unix#587: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:25:33.557Z|00249|connmgr|INFO|br0<->unix#590: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:33.584Z|00250|connmgr|INFO|br0<->unix#593: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:25:33.611Z|00251|connmgr|INFO|br0<->unix#596: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:33.636Z|00252|connmgr|INFO|br0<->unix#599: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:25:33.659Z|00253|connmgr|INFO|br0<->unix#602: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:33.682Z|00254|connmgr|INFO|br0<->unix#605: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:25:33.705Z|00255|connmgr|INFO|br0<->unix#608: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:33.727Z|00256|connmgr|INFO|br0<->unix#611: 1 flow_mods in the last 0 s (1 adds)\n
Apr 29 06:26:34.939 E ns/openshift-sdn pod/sdn-controller-j2df4 node/ip-10-0-135-108.ec2.internal container=sdn-controller container exited with code 137 (Error): I0429 06:07:54.573565       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 29 06:26:35.045 E ns/openshift-sdn pod/sdn-nh4xv node/ip-10-0-135-108.ec2.internal container=sdn container exited with code 1 (Error): 57.63:9101 for service "openshift-sdn/sdn:metrics"\nI0429 06:26:08.158806    2323 roundrobin.go:310] LoadBalancerRR: Setting endpoints for e2e-tests-service-upgrade-pr4dh/service-test: to [10.128.2.13:80]\nI0429 06:26:08.158957    2323 roundrobin.go:240] Delete endpoint 10.129.2.14:80 for service "e2e-tests-service-upgrade-pr4dh/service-test:"\nI0429 06:26:10.151089    2323 roundrobin.go:310] LoadBalancerRR: Setting endpoints for e2e-tests-service-upgrade-pr4dh/service-test: to [10.128.2.13:80 10.129.2.14:80]\nI0429 06:26:10.151151    2323 roundrobin.go:240] Delete endpoint 10.129.2.14:80 for service "e2e-tests-service-upgrade-pr4dh/service-test:"\nI0429 06:26:13.169024    2323 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.131.108:9101 10.0.135.108:9101 10.0.148.116:9101 10.0.157.63:9101 10.0.172.158:9101 10.0.173.108:9101]\nI0429 06:26:13.169174    2323 roundrobin.go:240] Delete endpoint 10.0.131.108:9101 for service "openshift-sdn/sdn:metrics"\nI0429 06:26:13.188929    2323 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.131.108:9101 10.0.135.108:9101 10.0.148.116:9101 10.0.157.63:9101 10.0.172.158:9101]\nI0429 06:26:13.188994    2323 roundrobin.go:240] Delete endpoint 10.0.173.108:9101 for service "openshift-sdn/sdn:metrics"\nI0429 06:26:33.742646    2323 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.131.108:9101 10.0.135.108:9101 10.0.148.116:9101 10.0.157.63:9101 10.0.172.158:9101 10.0.173.108:9101]\nI0429 06:26:33.742679    2323 roundrobin.go:240] Delete endpoint 10.0.173.108:9101 for service "openshift-sdn/sdn:metrics"\nI0429 06:26:33.762338    2323 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-sdn/sdn:metrics to [10.0.131.108:9101 10.0.148.116:9101 10.0.157.63:9101 10.0.172.158:9101 10.0.173.108:9101]\nI0429 06:26:33.762450    2323 roundrobin.go:240] Delete endpoint 10.0.135.108:9101 for service "openshift-sdn/sdn:metrics"\ninterrupt: Gracefully shutting down ...\n
Apr 29 06:26:41.272 E ns/openshift-sdn pod/sdn-ww4nx node/ip-10-0-148-116.ec2.internal container=sdn container exited with code 255 (Error): ar/run/openvswitch/db.sock: connect: connection refused\nI0429 06:26:39.328670   36917 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:26:39.428722   36917 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:26:39.528702   36917 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:26:39.628718   36917 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:26:39.728662   36917 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:26:39.828717   36917 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:26:39.928721   36917 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:26:40.028719   36917 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:26:40.128709   36917 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:26:40.228710   36917 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:26:40.228767   36917 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0429 06:26:40.228776   36917 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
Apr 29 06:26:54.322 E ns/openshift-service-ca pod/service-serving-cert-signer-85f7d688f7-ml8cp node/ip-10-0-148-116.ec2.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Apr 29 06:26:54.344 E ns/openshift-service-ca pod/apiservice-cabundle-injector-7f56dfbd4b-qnwt8 node/ip-10-0-148-116.ec2.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Apr 29 06:27:14.229 E ns/openshift-sdn pod/ovs-nb9bt node/ip-10-0-172-158.ec2.internal container=openvswitch container exited with code 137 (Error): 88Z|00088|connmgr|INFO|br0<->unix#195: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:22:34.389Z|00089|connmgr|INFO|br0<->unix#196: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2019-04-29T06:22:38.624Z|00090|connmgr|INFO|br0<->unix#199: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:22:38.652Z|00091|connmgr|INFO|br0<->unix#202: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:22:38.674Z|00092|bridge|INFO|bridge br0: deleted interface vethde913c76 on port 12\n2019-04-29T06:22:48.937Z|00093|connmgr|INFO|br0<->unix#205: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:22:48.966Z|00094|connmgr|INFO|br0<->unix#208: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:22:48.989Z|00095|bridge|INFO|bridge br0: deleted interface veth69beea99 on port 13\n2019-04-29T06:25:44.993Z|00096|connmgr|INFO|br0<->unix#236: 2 flow_mods in the last 0 s (2 adds)\n2019-04-29T06:25:45.079Z|00097|connmgr|INFO|br0<->unix#242: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:25:45.117Z|00098|connmgr|INFO|br0<->unix#245: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:25:45.264Z|00099|connmgr|INFO|br0<->unix#248: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:45.305Z|00100|connmgr|INFO|br0<->unix#251: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:25:45.333Z|00101|connmgr|INFO|br0<->unix#254: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:45.363Z|00102|connmgr|INFO|br0<->unix#257: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:25:45.399Z|00103|connmgr|INFO|br0<->unix#260: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:45.434Z|00104|connmgr|INFO|br0<->unix#263: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:25:45.457Z|00105|connmgr|INFO|br0<->unix#266: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:45.486Z|00106|connmgr|INFO|br0<->unix#269: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:25:45.533Z|00107|connmgr|INFO|br0<->unix#272: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:45.557Z|00108|connmgr|INFO|br0<->unix#275: 1 flow_mods in the last 0 s (1 adds)\n
Apr 29 06:27:16.274 E ns/openshift-sdn pod/sdn-9jlpv node/ip-10-0-172-158.ec2.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:15.055210   16949 healthcheck.go:87] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:15.120616   16949 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:15.220563   16949 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:15.320590   16949 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:15.420584   16949 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:15.521275   16949 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:15.620613   16949 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:15.722278   16949 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:15.820642   16949 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:15.922253   16949 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:16.025755   16949 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0429 06:27:16.025805   16949 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 29 06:27:19.710 E ns/openshift-sdn pod/sdn-controller-2rzbh node/ip-10-0-173-108.ec2.internal container=sdn-controller container exited with code 137 (Error): I0429 06:07:54.985361       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 29 06:27:46.792 E ns/openshift-sdn pod/ovs-xl6j6 node/ip-10-0-173-108.ec2.internal container=openvswitch container exited with code 137 (Error): |00271|connmgr|INFO|br0<->unix#610: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2019-04-29T06:24:50.219Z|00272|connmgr|INFO|br0<->unix#614: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:24:50.251Z|00273|connmgr|INFO|br0<->unix#617: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:24:50.278Z|00274|bridge|INFO|bridge br0: deleted interface vethfa6492bd on port 44\n2019-04-29T06:26:28.943Z|00275|connmgr|INFO|br0<->unix#635: 2 flow_mods in the last 0 s (2 adds)\n2019-04-29T06:26:29.006Z|00276|connmgr|INFO|br0<->unix#641: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:26:29.030Z|00277|connmgr|INFO|br0<->unix#644: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:26:29.055Z|00278|connmgr|INFO|br0<->unix#647: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:26:29.082Z|00279|connmgr|INFO|br0<->unix#650: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:26:29.108Z|00280|connmgr|INFO|br0<->unix#653: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:26:29.132Z|00281|connmgr|INFO|br0<->unix#656: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:26:29.168Z|00282|connmgr|INFO|br0<->unix#659: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:26:29.203Z|00283|connmgr|INFO|br0<->unix#662: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:26:29.230Z|00284|connmgr|INFO|br0<->unix#665: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:26:29.258Z|00285|connmgr|INFO|br0<->unix#668: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:26:29.285Z|00286|connmgr|INFO|br0<->unix#671: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:26:29.315Z|00287|connmgr|INFO|br0<->unix#674: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:26:29.338Z|00288|connmgr|INFO|br0<->unix#677: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:26:29.363Z|00289|connmgr|INFO|br0<->unix#680: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:26:29.387Z|00290|connmgr|INFO|br0<->unix#683: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:26:29.410Z|00291|connmgr|INFO|br0<->unix#686: 1 flow_mods in the last 0 s (1 adds)\n
Apr 29 06:27:57.993 E ns/openshift-sdn pod/sdn-96s4p node/ip-10-0-173-108.ec2.internal container=sdn container exited with code 255 (Error): ar/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:55.886944   41114 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:55.986821   41114 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:56.086898   41114 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:56.186872   41114 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:56.286892   41114 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:56.386848   41114 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:56.486884   41114 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:56.586885   41114 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:56.686860   41114 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:56.786816   41114 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:27:56.786875   41114 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0429 06:27:56.786883   41114 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
Apr 29 06:28:28.271 E ns/openshift-sdn pod/ovs-2t8p9 node/ip-10-0-135-108.ec2.internal container=openvswitch container exited with code 137 (Error): :25.733Z|00250|connmgr|INFO|br0<->unix#533: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:19:25.765Z|00251|connmgr|INFO|br0<->unix#536: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:19:25.790Z|00252|bridge|INFO|bridge br0: deleted interface vethd6db4f51 on port 43\n2019-04-29T06:22:30.120Z|00253|bridge|INFO|bridge br0: added interface veth3e3f46ee on port 44\n2019-04-29T06:22:30.154Z|00254|connmgr|INFO|br0<->unix#562: 5 flow_mods in the last 0 s (5 adds)\n2019-04-29T06:22:30.209Z|00255|connmgr|INFO|br0<->unix#566: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2019-04-29T06:22:30.212Z|00256|connmgr|INFO|br0<->unix#568: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:26:47.882Z|00257|connmgr|INFO|br0<->unix#603: 2 flow_mods in the last 0 s (2 adds)\n2019-04-29T06:26:47.959Z|00258|connmgr|INFO|br0<->unix#609: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:26:47.993Z|00259|connmgr|INFO|br0<->unix#612: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:26:48.026Z|00260|connmgr|INFO|br0<->unix#615: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:26:48.213Z|00261|connmgr|INFO|br0<->unix#618: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:26:48.246Z|00262|connmgr|INFO|br0<->unix#621: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:26:48.277Z|00263|connmgr|INFO|br0<->unix#624: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:26:48.320Z|00264|connmgr|INFO|br0<->unix#627: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:26:48.357Z|00265|connmgr|INFO|br0<->unix#630: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:26:48.396Z|00266|connmgr|INFO|br0<->unix#633: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:26:48.424Z|00267|connmgr|INFO|br0<->unix#636: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:26:48.452Z|00268|connmgr|INFO|br0<->unix#639: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:26:48.486Z|00269|connmgr|INFO|br0<->unix#642: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:26:48.521Z|00270|connmgr|INFO|br0<->unix#645: 1 flow_mods in the last 0 s (1 adds)\n
Apr 29 06:28:29.600 E ns/openshift-multus pod/multus-7wq77 node/ip-10-0-148-116.ec2.internal container=kube-multus container exited with code 137 (Error): 
Apr 29 06:28:30.309 E ns/openshift-sdn pod/sdn-rl8sx node/ip-10-0-135-108.ec2.internal container=sdn container exited with code 255 (Error): ix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:28:29.059292   46850 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:28:29.159236   46850 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:28:29.259242   46850 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:28:29.359244   46850 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:28:29.459223   46850 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:28:29.559139   46850 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:28:29.659271   46850 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:28:29.759220   46850 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:28:29.859247   46850 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:28:29.959258   46850 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:28:30.063395   46850 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0429 06:28:30.063454   46850 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: OVS health check failed: plugin is not setup\n
Apr 29 06:28:34.475 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 29 06:28:42.334 E ns/openshift-machine-api pod/cluster-autoscaler-operator-7d6cc94d5d-hf9s4 node/ip-10-0-135-108.ec2.internal container=cluster-autoscaler-operator container exited with code 255 (Error): 
Apr 29 06:29:01.112 E ns/openshift-sdn pod/ovs-5g2m6 node/ip-10-0-157-63.ec2.internal container=openvswitch container exited with code 137 (Error): 48Z|00106|connmgr|INFO|br0<->unix#230: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:22:46.750Z|00107|connmgr|INFO|br0<->unix#232: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2019-04-29T06:22:46.817Z|00108|connmgr|INFO|br0<->unix#235: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:22:46.890Z|00109|connmgr|INFO|br0<->unix#238: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:22:46.923Z|00110|bridge|INFO|bridge br0: deleted interface veth1ee79578 on port 15\n2019-04-29T06:22:51.224Z|00111|connmgr|INFO|br0<->unix#244: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:22:51.261Z|00112|connmgr|INFO|br0<->unix#247: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:22:51.290Z|00113|bridge|INFO|bridge br0: deleted interface veth58d99072 on port 17\n2019-04-29T06:25:57.625Z|00114|connmgr|INFO|br0<->unix#276: 2 flow_mods in the last 0 s (2 adds)\n2019-04-29T06:25:57.714Z|00115|connmgr|INFO|br0<->unix#282: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:25:57.752Z|00116|connmgr|INFO|br0<->unix#285: 1 flow_mods in the last 0 s (1 deletes)\n2019-04-29T06:25:57.872Z|00117|connmgr|INFO|br0<->unix#288: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:57.909Z|00118|connmgr|INFO|br0<->unix#291: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:25:57.946Z|00119|connmgr|INFO|br0<->unix#294: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:57.976Z|00120|connmgr|INFO|br0<->unix#297: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:25:58.011Z|00121|connmgr|INFO|br0<->unix#300: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:58.041Z|00122|connmgr|INFO|br0<->unix#303: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:25:58.065Z|00123|connmgr|INFO|br0<->unix#306: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:58.092Z|00124|connmgr|INFO|br0<->unix#309: 1 flow_mods in the last 0 s (1 adds)\n2019-04-29T06:25:58.114Z|00125|connmgr|INFO|br0<->unix#312: 3 flow_mods in the last 0 s (3 adds)\n2019-04-29T06:25:58.137Z|00126|connmgr|INFO|br0<->unix#315: 1 flow_mods in the last 0 s (1 adds)\n
Apr 29 06:29:12.143 E ns/openshift-sdn pod/sdn-9r6c5 node/ip-10-0-157-63.ec2.internal container=sdn container exited with code 255 (Error): ar/run/openvswitch/db.sock: connect: connection refused\nI0429 06:29:10.201400   25625 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:29:10.301346   25625 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:29:10.401294   25625 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:29:10.501288   25625 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:29:10.601262   25625 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:29:10.701290   25625 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:29:10.801278   25625 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:29:10.901265   25625 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:29:11.001302   25625 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:29:11.101544   25625 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0429 06:29:11.101615   25625 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nF0429 06:29:11.101627   25625 healthcheck.go:78] SDN healthcheck detected unhealthy OVS server, restarting: timed out waiting for the condition\n
Apr 29 06:29:18.203 E ns/openshift-multus pod/multus-lxm47 node/ip-10-0-173-108.ec2.internal container=kube-multus container exited with code 137 (Error): 
Apr 29 06:29:38.320 E ns/openshift-cluster-machine-approver pod/machine-approver-b65645448-f4m66 node/ip-10-0-173-108.ec2.internal container=machine-approver-controller container exited with code 2 (Error): 
Apr 29 06:29:50.390 E ns/openshift-service-ca-operator pod/service-ca-operator-b466c7f77-qhfl7 node/ip-10-0-173-108.ec2.internal container=operator container exited with code 2 (Error): 
Apr 29 06:31:07.148 E ns/openshift-service-ca pod/apiservice-cabundle-injector-7f56dfbd4b-qnwt8 node/ip-10-0-148-116.ec2.internal container=apiservice-cabundle-injector-controller container exited with code 2 (Error): 
Apr 29 06:31:07.162 E ns/openshift-service-ca pod/configmap-cabundle-injector-85b8f75fb6-kcfpz node/ip-10-0-148-116.ec2.internal container=configmap-cabundle-injector-controller container exited with code 2 (Error): 
Apr 29 06:31:07.504 E ns/openshift-service-ca pod/service-serving-cert-signer-85f7d688f7-ml8cp node/ip-10-0-148-116.ec2.internal container=service-serving-cert-signer-controller container exited with code 2 (Error): 
Apr 29 06:31:25.677 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-577d55f48c-76vmj node/ip-10-0-173-108.ec2.internal container=kube-apiserver-operator container exited with code 2 (Error): jectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"0f687510-6a45-11e9-9c32-126612525d08", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-5-ip-10-0-148-116.ec2.internal -n openshift-kube-apiserver because it was missing\nW0429 06:25:21.599341       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 16097 (17551)\nW0429 06:26:30.688788       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 15188 (18202)\nW0429 06:28:16.216359       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 15188 (18960)\nW0429 06:29:07.719509       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 14470 (16224)\nW0429 06:29:09.797769       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 15211 (19312)\nI0429 06:30:04.025205       1 externalloadbalancer.go:23] syncing external loadbalancer hostnames: api.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com\nI0429 06:30:04.258815       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"0f687510-6a45-11e9-9c32-126612525d08", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LoglevelChange' Changed loglevel level to "2"\nI0429 06:30:10.787568       1 servicehostname.go:38] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local]\nW0429 06:30:43.606765       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 17795 (20095)\n
Apr 29 06:34:30.775 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-778db6cd56-62cq9 node/ip-10-0-173-108.ec2.internal container=kube-controller-manager-operator container exited with code 2 (Error):  reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 15180 (15204)\nW0429 06:21:46.666404       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 13548 (15204)\nW0429 06:21:46.739418       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 9766 (15847)\nI0429 06:22:34.956721       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"0f725b82-6a45-11e9-9c32-126612525d08", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LoglevelChange' Changed loglevel level to "2"\nW0429 06:27:50.336771       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 15846 (18777)\nW0429 06:30:33.535939       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ServiceAccount ended with: too old resource version: 15864 (16238)\nW0429 06:30:44.937308       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 15872 (20104)\nW0429 06:30:53.337818       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 15872 (20173)\nI0429 06:32:34.952907       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"0f725b82-6a45-11e9-9c32-126612525d08", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LoglevelChange' Changed loglevel level to "2"\nW0429 06:34:03.938765       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 15865 (16031)\n
Apr 29 06:36:06.693 E ns/openshift-machine-config-operator pod/machine-config-operator-79c96d9d98-wz86v node/ip-10-0-135-108.ec2.internal container=machine-config-operator container exited with code 2 (Error): 
Apr 29 06:36:09.703 E ns/openshift-machine-api pod/machine-api-operator-5c5db88d9b-49kpf node/ip-10-0-135-108.ec2.internal container=machine-api-operator container exited with code 2 (Error): 
Apr 29 06:38:13.846 E ns/openshift-machine-config-operator pod/machine-config-controller-56d945f488-x27w8 node/ip-10-0-135-108.ec2.internal container=machine-config-controller container exited with code 2 (Error): 
Apr 29 06:38:14.855 E ns/openshift-machine-config-operator pod/machine-config-server-9jf9w node/ip-10-0-135-108.ec2.internal container=machine-config-server container exited with code 2 (Error): 
Apr 29 06:38:17.105 E ns/openshift-machine-api pod/machine-api-controllers-6f9599c969-v2mc4 node/ip-10-0-135-108.ec2.internal container=controller-manager container exited with code 1 (Error): 
Apr 29 06:38:17.105 E ns/openshift-machine-api pod/machine-api-controllers-6f9599c969-v2mc4 node/ip-10-0-135-108.ec2.internal container=nodelink-controller container exited with code 2 (Error): 
Apr 29 06:38:27.585 E ns/openshift-machine-config-operator pod/machine-config-server-5klkp node/ip-10-0-148-116.ec2.internal container=machine-config-server container exited with code 2 (Error): 
Apr 29 06:39:04.475 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 29 06:39:07.950 E ns/openshift-operator-lifecycle-manager pod/packageserver-6d9564db76-x5spf node/ip-10-0-135-108.ec2.internal container=packageserver container exited with code 1 (Error): 
Apr 29 06:39:12.732 E ns/openshift-operator-lifecycle-manager pod/packageserver-6d9564db76-x5spf node/ip-10-0-135-108.ec2.internal container=packageserver container exited with code 1 (Error): 
Apr 29 06:39:49.475 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 29 06:39:57.148 E ns/openshift-operator-lifecycle-manager pod/catalog-operator-95d675754-2dlh2 node/ip-10-0-173-108.ec2.internal container=catalog-operator container exited with code 255 (Error): 
Apr 29 06:39:57.161 E ns/openshift-operator-lifecycle-manager pod/olm-operator-5859b45d8b-v7gch node/ip-10-0-173-108.ec2.internal container=olm-operator container exited with code 255 (Error): 
Apr 29 06:40:01.436 E ns/openshift-machine-config-operator pod/machine-config-daemon-tw96d node/ip-10-0-157-63.ec2.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 29 06:40:01.477 E ns/openshift-marketplace pod/certified-operators-6dbc497999-4l66j node/ip-10-0-157-63.ec2.internal container=certified-operators container exited with code 255 (Error): 
Apr 29 06:40:01.519 E ns/openshift-monitoring pod/node-exporter-986cb node/ip-10-0-157-63.ec2.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 29 06:40:01.519 E ns/openshift-monitoring pod/node-exporter-986cb node/ip-10-0-157-63.ec2.internal container=node-exporter container exited with code 255 (Error): 
Apr 29 06:40:02.063 E ns/openshift-marketplace pod/redhat-operators-8698555f86-krtck node/ip-10-0-157-63.ec2.internal container=redhat-operators container exited with code 255 (Error): 
Apr 29 06:40:06.281 E ns/openshift-cluster-node-tuning-operator pod/tuned-cwt4t node/ip-10-0-157-63.ec2.internal container=tuned container exited with code 255 (Error): 
Apr 29 06:40:06.679 E ns/openshift-ingress pod/router-default-7ff89986f6-q5nj5 node/ip-10-0-157-63.ec2.internal container=router container exited with code 255 (Error): ion; LastStreamID=163, ErrCode=NO_ERROR, debug=""\nE0429 06:37:59.042946       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=163, ErrCode=NO_ERROR, debug=""\nE0429 06:37:59.043164       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=163, ErrCode=NO_ERROR, debug=""\nW0429 06:37:59.447722       1 reflector.go:341] github.com/openshift/router/pkg/router/template/service_lookup.go:32: watch of *v1.Service ended with: too old resource version: 16576 (21796)\nI0429 06:38:07.917535       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:38:15.701471       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:38:20.711671       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:38:25.714620       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:38:54.128156       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:38:59.148755       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nE0429 06:38:59.997407       1 reflector.go:322] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: Failed to watch *v1.Endpoints: Get https://172.30.0.1:443/api/v1/endpoints?resourceVersion=24443&timeoutSeconds=341&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0429 06:39:04.110487       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 29 06:40:07.265 E ns/openshift-image-registry pod/node-ca-86k8z node/ip-10-0-157-63.ec2.internal container=node-ca container exited with code 255 (Error): 
Apr 29 06:40:07.673 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.ec2.internal container=prometheus container exited with code 255 (Error): 
Apr 29 06:40:07.673 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.ec2.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 29 06:40:07.673 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.ec2.internal container=prom-label-proxy container exited with code 255 (Error): 
Apr 29 06:40:07.673 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.ec2.internal container=rules-configmap-reloader container exited with code 255 (Error): 
Apr 29 06:40:07.673 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.ec2.internal container=prometheus-proxy container exited with code 255 (Error): 
Apr 29 06:40:07.673 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.ec2.internal container=prometheus-config-reloader container exited with code 255 (Error): 
Apr 29 06:40:07.929 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-85dfbb5fc-mn5v2 node/ip-10-0-173-108.ec2.internal container=guard container exited with code 255 (Error): 
Apr 29 06:40:09.069 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-157-63.ec2.internal container=alertmanager container exited with code 255 (Error): 
Apr 29 06:40:09.069 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-157-63.ec2.internal container=alertmanager-proxy container exited with code 255 (Error): 
Apr 29 06:40:09.069 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-157-63.ec2.internal container=config-reloader container exited with code 255 (Error): 
Apr 29 06:40:09.663 E ns/openshift-monitoring pod/prometheus-adapter-55fdcd6bcd-4ggfn node/ip-10-0-157-63.ec2.internal container=prometheus-adapter container exited with code 255 (Error): 
Apr 29 06:40:09.727 E ns/openshift-image-registry pod/node-ca-nhhd5 node/ip-10-0-173-108.ec2.internal container=node-ca container exited with code 255 (Error): 
Apr 29 06:40:11.270 E ns/openshift-sdn pod/sdn-9r6c5 node/ip-10-0-157-63.ec2.internal container=sdn container exited with code 255 (Error): hift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0429 06:38:58.834634   31367 service.go:344] Removing service port "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0429 06:38:58.852644   31367 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:\nI0429 06:38:58.885052   31367 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.128.0.13:5443]\nI0429 06:38:58.885085   31367 roundrobin.go:240] Delete endpoint 10.128.0.13:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0429 06:38:58.885658   31367 service.go:319] Adding new service port "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:" at 172.30.175.183:443/TCP\nI0429 06:38:59.046620   31367 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics:https-metrics\nI0429 06:38:59.444126   31367 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-operator-lifecycle-manager/catalog-operator-metrics:https-metrics\nE0429 06:39:04.651471   31367 healthcheck.go:57] SDN healthcheck disconnected from OVS server: <nil>\nI0429 06:39:04.651586   31367 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0429 06:39:04.751871   31367 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0429 06:39:04.852049   31367 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0429 06:39:04.953277   31367 healthcheck.go:62] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\ninterrupt: Gracefully shutting down ...\n
Apr 29 06:40:11.728 E ns/openshift-monitoring pod/node-exporter-5mmsn node/ip-10-0-173-108.ec2.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 29 06:40:11.728 E ns/openshift-monitoring pod/node-exporter-5mmsn node/ip-10-0-173-108.ec2.internal container=node-exporter container exited with code 255 (Error): 
Apr 29 06:40:11.864 E ns/openshift-dns pod/dns-default-f5429 node/ip-10-0-157-63.ec2.internal container=dns container exited with code 255 (Error): 
Apr 29 06:40:11.864 E ns/openshift-dns pod/dns-default-f5429 node/ip-10-0-157-63.ec2.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 29 06:40:13.329 E ns/openshift-dns pod/dns-default-klp46 node/ip-10-0-173-108.ec2.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 29 06:40:13.329 E ns/openshift-dns pod/dns-default-klp46 node/ip-10-0-173-108.ec2.internal container=dns container exited with code 255 (Error): 
Apr 29 06:40:14.928 E ns/openshift-cluster-node-tuning-operator pod/tuned-wjckt node/ip-10-0-173-108.ec2.internal container=tuned container exited with code 255 (Error): 
Apr 29 06:40:21.131 E ns/openshift-multus pod/multus-2rzzw node/ip-10-0-173-108.ec2.internal container=kube-multus container exited with code 255 (Error): 
Apr 29 06:40:22.264 E ns/openshift-machine-config-operator pod/machine-config-daemon-tw96d node/ip-10-0-157-63.ec2.internal container=machine-config-daemon container exited with code 143 (Error): 
Apr 29 06:40:24.927 E ns/openshift-controller-manager pod/controller-manager-qt2t5 node/ip-10-0-173-108.ec2.internal container=controller-manager container exited with code 255 (Error): 
Apr 29 06:40:25.928 E ns/openshift-sdn pod/sdn-controller-jh557 node/ip-10-0-173-108.ec2.internal container=sdn-controller container exited with code 255 (Error): I0429 06:27:22.423089       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 29 06:40:27.728 E ns/openshift-machine-config-operator pod/machine-config-server-h8mws node/ip-10-0-173-108.ec2.internal container=machine-config-server container exited with code 255 (Error): 
Apr 29 06:40:28.728 E ns/openshift-apiserver pod/apiserver-zrg75 node/ip-10-0-173-108.ec2.internal container=openshift-apiserver container exited with code 255 (Error): "transport is closing"\nI0429 06:38:59.540438       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:38:59.540455       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:38:59.540624       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0429 06:38:59.540749       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:38:59.540810       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nW0429 06:38:59.555936       1 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {etcd.openshift-etcd.svc:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 172.30.115.42:2379: connect: connection refused". Reconnecting...\nW0429 06:38:59.556102       1 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {etcd.openshift-etcd.svc:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 172.30.115.42:2379: connect: connection refused". Reconnecting...\nI0429 06:38:59.685664       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/client-ca/ca-bundle.crt\nI0429 06:38:59.685711       1 clusterquotamapping.go:145] Shutting down ClusterQuotaMappingController controller\nI0429 06:38:59.685738       1 clientca.go:69] Shutting down DynamicCA: /var/run/configmaps/aggregator-client-ca/ca-bundle.crt\nI0429 06:38:59.685741       1 serving.go:88] Shutting down DynamicLoader\nI0429 06:38:59.685750       1 controller.go:87] Shutting down OpenAPI AggregationController\nE0429 06:38:59.686079       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\nI0429 06:38:59.686944       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Apr 29 06:40:35.902 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-131-108.ec2.internal container=prometheus container exited with code 1 (Error): 
Apr 29 06:40:40.502 E ns/openshift-machine-config-operator pod/machine-config-daemon-k2z59 node/ip-10-0-172-158.ec2.internal container=machine-config-daemon container exited with code 143 (Error): 
Apr 29 06:40:44.059 E ns/openshift-machine-config-operator pod/machine-config-daemon-8xmxf node/ip-10-0-148-116.ec2.internal container=machine-config-daemon container exited with code 143 (Error): 
Apr 29 06:40:46.730 E ns/openshift-etcd pod/etcd-member-ip-10-0-173-108.ec2.internal node/ip-10-0-173-108.ec2.internal container=etcd-member container exited with code 255 (Error):  version from 3.0 to 3.3\n2019-04-29 06:38:23.010280 I | etcdserver/api: enabled capabilities for version 3.3\n2019-04-29 06:38:23.014967 I | rafthttp: started streaming with peer ba63a4fffcc9458d (stream Message reader)\n2019-04-29 06:38:23.015266 I | rafthttp: started streaming with peer ba63a4fffcc9458d (stream MsgApp v2 reader)\n2019-04-29 06:38:23.025678 I | rafthttp: peer ba63a4fffcc9458d became active\n2019-04-29 06:38:23.025837 I | rafthttp: established a TCP streaming connection with peer ba63a4fffcc9458d (stream MsgApp v2 writer)\n2019-04-29 06:38:23.035126 I | rafthttp: established a TCP streaming connection with peer 6a4e06c0e09bcdc4 (stream Message reader)\n2019-04-29 06:38:23.040114 I | rafthttp: established a TCP streaming connection with peer ba63a4fffcc9458d (stream Message writer)\n2019-04-29 06:38:23.040232 I | rafthttp: established a TCP streaming connection with peer 6a4e06c0e09bcdc4 (stream MsgApp v2 reader)\n2019-04-29 06:38:23.044525 I | rafthttp: established a TCP streaming connection with peer ba63a4fffcc9458d (stream MsgApp v2 reader)\n2019-04-29 06:38:23.044730 I | rafthttp: established a TCP streaming connection with peer ba63a4fffcc9458d (stream Message reader)\n2019-04-29 06:38:23.054777 I | etcdserver: 617d0e8440ef7c62 initialzed peer connection; fast-forwarding 8 ticks (election ticks 10) with 2 active peer(s)\n2019-04-29 06:38:23.174582 I | embed: ready to serve client requests\n2019-04-29 06:38:23.176950 I | embed: serving client requests on [::]:2379\n2019-04-29 06:38:23.177447 I | etcdserver: published {Name:etcd-member-ip-10-0-173-108.ec2.internal ClientURLs:[https://10.0.173.108:2379]} to cluster 7258b06eb20735e\nWARNING: 2019/04/29 06:38:23 Failed to dial 0.0.0.0:2379: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate"; please retry.\n2019-04-29 06:38:23.190041 I | embed: rejected connection from "127.0.0.1:38448" (error "tls: failed to verify client's certificate: x509: certificate specifies an incompatible key usage", ServerName "")\n
Apr 29 06:40:46.730 E ns/openshift-etcd pod/etcd-member-ip-10-0-173-108.ec2.internal node/ip-10-0-173-108.ec2.internal container=etcd-metrics container exited with code 255 (Error): 2019-04-29 06:38:23.061078 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2019-04-29 06:38:23.062355 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2019-04-29 06:38:23.063215 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2019-04-29 06:38:23.093869 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2019/04/29 06:38:59 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.173.108:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n
Apr 29 06:40:48.131 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-173-108.ec2.internal node/ip-10-0-173-108.ec2.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0429 06:36:24.810665       1 certsync_controller.go:161] Starting CertSyncer\nI0429 06:36:24.810763       1 observer_polling.go:106] Starting file observer\n
Apr 29 06:40:48.131 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-173-108.ec2.internal node/ip-10-0-173-108.ec2.internal container=kube-controller-manager-5 container exited with code 255 (Error): roller.go:734] Service has been deleted openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com. Attempting to cleanup load balancer resources\nI0429 06:38:59.035592       1 replica_set.go:477] Too few replicas for ReplicaSet openshift-operator-lifecycle-manager/olm-operator-5859b45d8b, need 1, creating 1\nI0429 06:38:59.047647       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"olm-operator-5859b45d8b", UID:"0fc1ffb2-6a45-11e9-9c32-126612525d08", APIVersion:"apps/v1", ResourceVersion:"4228", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: olm-operator-5859b45d8b-mktwq\nI0429 06:38:59.236123       1 replica_set.go:477] Too few replicas for ReplicaSet openshift-service-ca-operator/service-ca-operator-7d555bb85f, need 1, creating 1\nI0429 06:38:59.250957       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-7d555bb85f", UID:"2b46030d-6a48-11e9-b503-0ab10c4ec9d4", APIVersion:"apps/v1", ResourceVersion:"19879", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: service-ca-operator-7d555bb85f-bvfxw\nI0429 06:38:59.436376       1 replica_set.go:477] Too few replicas for ReplicaSet openshift-operator-lifecycle-manager/catalog-operator-95d675754, need 1, creating 1\nI0429 06:38:59.446519       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"catalog-operator-95d675754", UID:"0f6210d5-6a45-11e9-9c32-126612525d08", APIVersion:"apps/v1", ResourceVersion:"4145", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: catalog-operator-95d675754-ggg9s\nI0429 06:38:59.476943       1 deployment_controller.go:484] Error syncing deployment openshift-operator-lifecycle-manager/catalog-operator: Operation cannot be fulfilled on deployments.apps "catalog-operator": the object has been modified; please apply your changes to the latest version and try again\n
Apr 29 06:40:48.728 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-173-108.ec2.internal node/ip-10-0-173-108.ec2.internal container=kube-apiserver-6 container exited with code 255 (Error): :107] OpenAPI AggregationController: Processing item v1.build.openshift.io\nI0429 06:38:48.653616       1 controller.go:107] OpenAPI AggregationController: Processing item v1.packages.operators.coreos.com\nI0429 06:38:50.280549       1 controller.go:107] OpenAPI AggregationController: Processing item v1.project.openshift.io\nI0429 06:38:51.762476       1 controller.go:107] OpenAPI AggregationController: Processing item v1.route.openshift.io\nI0429 06:38:53.344876       1 controller.go:107] OpenAPI AggregationController: Processing item v1.security.openshift.io\nI0429 06:38:54.056499       1 cacher.go:605] cacher (*core.Pod): 1 objects queued in incoming channel.\nI0429 06:38:54.829778       1 controller.go:107] OpenAPI AggregationController: Processing item v1.user.openshift.io\nI0429 06:38:56.329389       1 controller.go:107] OpenAPI AggregationController: Processing item v1.image.openshift.io\nI0429 06:38:57.829499       1 controller.go:107] OpenAPI AggregationController: Processing item v1.authorization.openshift.io\nE0429 06:38:58.842509       1 available_controller.go:400] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0429 06:38:58.881882       1 available_controller.go:400] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0429 06:38:58.891584       1 available_controller.go:400] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0429 06:38:59.430917       1 controller.go:107] OpenAPI AggregationController: Processing item v1.quota.openshift.io\n
Apr 29 06:40:48.728 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-173-108.ec2.internal node/ip-10-0-173-108.ec2.internal container=kube-apiserver-cert-syncer-6 container exited with code 255 (Error): I0429 06:34:31.359691       1 observer_polling.go:106] Starting file observer\nI0429 06:34:31.359756       1 certsync_controller.go:161] Starting CertSyncer\n
Apr 29 06:40:50.143 E ns/openshift-etcd pod/etcd-member-ip-10-0-173-108.ec2.internal node/ip-10-0-173-108.ec2.internal container=etcd-member container exited with code 255 (Error):  version from 3.0 to 3.3\n2019-04-29 06:38:23.010280 I | etcdserver/api: enabled capabilities for version 3.3\n2019-04-29 06:38:23.014967 I | rafthttp: started streaming with peer ba63a4fffcc9458d (stream Message reader)\n2019-04-29 06:38:23.015266 I | rafthttp: started streaming with peer ba63a4fffcc9458d (stream MsgApp v2 reader)\n2019-04-29 06:38:23.025678 I | rafthttp: peer ba63a4fffcc9458d became active\n2019-04-29 06:38:23.025837 I | rafthttp: established a TCP streaming connection with peer ba63a4fffcc9458d (stream MsgApp v2 writer)\n2019-04-29 06:38:23.035126 I | rafthttp: established a TCP streaming connection with peer 6a4e06c0e09bcdc4 (stream Message reader)\n2019-04-29 06:38:23.040114 I | rafthttp: established a TCP streaming connection with peer ba63a4fffcc9458d (stream Message writer)\n2019-04-29 06:38:23.040232 I | rafthttp: established a TCP streaming connection with peer 6a4e06c0e09bcdc4 (stream MsgApp v2 reader)\n2019-04-29 06:38:23.044525 I | rafthttp: established a TCP streaming connection with peer ba63a4fffcc9458d (stream MsgApp v2 reader)\n2019-04-29 06:38:23.044730 I | rafthttp: established a TCP streaming connection with peer ba63a4fffcc9458d (stream Message reader)\n2019-04-29 06:38:23.054777 I | etcdserver: 617d0e8440ef7c62 initialzed peer connection; fast-forwarding 8 ticks (election ticks 10) with 2 active peer(s)\n2019-04-29 06:38:23.174582 I | embed: ready to serve client requests\n2019-04-29 06:38:23.176950 I | embed: serving client requests on [::]:2379\n2019-04-29 06:38:23.177447 I | etcdserver: published {Name:etcd-member-ip-10-0-173-108.ec2.internal ClientURLs:[https://10.0.173.108:2379]} to cluster 7258b06eb20735e\nWARNING: 2019/04/29 06:38:23 Failed to dial 0.0.0.0:2379: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate"; please retry.\n2019-04-29 06:38:23.190041 I | embed: rejected connection from "127.0.0.1:38448" (error "tls: failed to verify client's certificate: x509: certificate specifies an incompatible key usage", ServerName "")\n
Apr 29 06:40:50.143 E ns/openshift-etcd pod/etcd-member-ip-10-0-173-108.ec2.internal node/ip-10-0-173-108.ec2.internal container=etcd-metrics container exited with code 255 (Error): 2019-04-29 06:38:23.061078 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2019-04-29 06:38:23.062355 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2019-04-29 06:38:23.063215 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2019-04-29 06:38:23.093869 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2019/04/29 06:38:59 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.173.108:9978: connect: connection refused"; Reconnecting to {etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n
Apr 29 06:40:50.529 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-173-108.ec2.internal node/ip-10-0-173-108.ec2.internal container=scheduler container exited with code 255 (Error): h fit predicates 'map[MaxAzureDiskVolumeCount:{} CheckNodeUnschedulable:{} CheckVolumeBinding:{} NoVolumeZoneConflict:{} MatchInterPodAffinity:{} GeneralPredicates:{} PodToleratesNodeTaints:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} MaxCSIVolumeCountPred:{} NoDiskConflict:{}]' and priority functions 'map[BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{}]'\nW0429 06:36:56.218778       1 authorization.go:47] Authorization is disabled\nW0429 06:36:56.218794       1 authentication.go:55] Authentication is disabled\nI0429 06:36:56.218803       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251\nI0429 06:36:56.219944       1 serving.go:195] [0] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1556518157" (2019-04-29 06:09:35 +0000 UTC to 2021-04-28 06:09:36 +0000 UTC (now=2019-04-29 06:36:56.219927969 +0000 UTC))\nI0429 06:36:56.220007       1 serving.go:195] [1] "/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1556518157" [] issuer="<self>" (2019-04-29 06:09:17 +0000 UTC to 2020-04-28 06:09:18 +0000 UTC (now=2019-04-29 06:36:56.219962711 +0000 UTC))\nI0429 06:36:56.220043       1 secure_serving.go:136] Serving securely on [::]:10259\nI0429 06:36:56.220076       1 serving.go:77] Starting DynamicLoader\nI0429 06:36:57.121953       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller\nI0429 06:36:57.222266       1 controller_utils.go:1034] Caches are synced for scheduler controller\nI0429 06:36:57.222314       1 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-scheduler...\n
Apr 29 06:40:53.975 E ns/openshift-machine-config-operator pod/machine-config-daemon-rk6nb node/ip-10-0-131-108.ec2.internal container=machine-config-daemon container exited with code 143 (Error): 
Apr 29 06:41:06.269 E clusteroperator/monitoring changed Degraded to True: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io prometheus-k8s)
Apr 29 06:41:08.442 E ns/openshift-operator-lifecycle-manager pod/packageserver-6d9564db76-x5spf node/ip-10-0-135-108.ec2.internal container=packageserver container exited with code 137 (Error): 
Apr 29 06:41:13.473 E kube-apiserver Kube API started failing: Get https://api.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 29 06:41:19.475 - 90s   E kube-apiserver Kube API is not responding to GET requests
Apr 29 06:42:50.123 E kube-apiserver failed contacting the API: Get https://api.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&resourceVersion=24957&timeout=9m49s&timeoutSeconds=589&watch=true: dial tcp 3.208.82.127:6443: connect: connection refused
Apr 29 06:42:50.123 E kube-apiserver failed contacting the API: Get https://api.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?resourceVersion=26315&timeout=8m6s&timeoutSeconds=486&watch=true: dial tcp 3.208.82.127:6443: connect: connection refused
Apr 29 06:42:50.123 E kube-apiserver failed contacting the API: Get https://api.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?resourceVersion=26332&timeout=7m15s&timeoutSeconds=435&watch=true: dial tcp 3.208.82.127:6443: connect: connection refused
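The cluster of kube-apiserver failures above mixes two distinct client-side symptoms: "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" is the Go HTTP client's own timeout firing before any response headers arrive, while "dial tcp ...: connect: connection refused" means the TCP connection itself was rejected because nothing was listening on the endpoint. A minimal, hypothetical probe (not part of the openshift-tests monitor or this job's tooling; the URL and the 3-second timeout are placeholders mirroring the "?timeout=3s" GETs above) reproduces the distinction:

// Illustrative sketch only: probe an API endpoint with a short client timeout
// and report which of the two failure modes logged above occurred.
package main

import (
    "errors"
    "fmt"
    "net"
    "net/http"
    "time"
)

func main() {
    // Hypothetical endpoint; substitute the cluster's API URL when reproducing.
    url := "https://127.0.0.1:6443/api/v1/namespaces/kube-system?timeout=3s"

    client := &http.Client{Timeout: 3 * time.Second}
    resp, err := client.Get(url)
    if err != nil {
        var netErr net.Error
        if errors.As(err, &netErr) && netErr.Timeout() {
            // Surfaces as "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
            fmt.Println("timed out:", err)
        } else {
            // Covers "dial tcp ...: connect: connection refused" and TLS handshake failures.
            fmt.Println("request failed:", err)
        }
        return
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
}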
Apr 29 06:42:58.571 E ns/openshift-authentication-operator pod/authentication-operator-6d5b8cf694-vxp54 node/ip-10-0-135-108.ec2.internal container=operator container exited with code 255 (Error): hift.io)\nE0429 06:40:20.241183       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io integrated-oauth-server)\nI0429 06:40:20.241562       1 status_controller.go:160] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-04-29T06:14:58Z","message":"Degraded: failed handling the route: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io integrated-oauth-server)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2019-04-29T06:21:49Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2019-04-29T06:21:49Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2019-04-29T06:14:58Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0429 06:40:20.247867       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"0fda3388-6a45-11e9-9c32-126612525d08", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for operator authentication changed: Degraded message changed from "Degraded: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout" to "Degraded: failed handling the route: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io integrated-oauth-server)"\nI0429 06:41:45.841130       1 leaderelection.go:249] failed to renew lease openshift-authentication-operator/cluster-authentication-operator-lock: failed to tryAcquireOrRenew context deadline exceeded\nF0429 06:41:45.841256       1 leaderelection.go:65] leaderelection lost\n
Apr 29 06:42:58.863 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-56dflfq8s node/ip-10-0-135-108.ec2.internal container=operator container exited with code 255 (Error): 
Apr 29 06:43:15.996 E clusteroperator/marketplace changed Degraded to True: Operator exited
Apr 29 06:43:21.473 E kube-apiserver Kube API started failing: Get https://api.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 29 06:43:34.475 - 45s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 29 06:43:34.475 - 105s  E kube-apiserver Kube API is not responding to GET requests
Apr 29 06:43:53.626 E kube-apiserver failed contacting the API: Get https://api.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?resourceVersion=27300&timeout=5m49s&timeoutSeconds=349&watch=true: net/http: TLS handshake timeout
Apr 29 06:43:54.638 E kube-apiserver failed contacting the API: Get https://api.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&resourceVersion=27298&timeout=6m51s&timeoutSeconds=411&watch=true: net/http: TLS handshake timeout
Apr 29 06:44:04.656 E kube-apiserver failed contacting the API: Get https://api.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?resourceVersion=27202&timeout=6m10s&timeoutSeconds=370&watch=true: dial tcp 3.208.82.127:6443: i/o timeout
Apr 29 06:45:19.196 E kube-apiserver failed contacting the API: Get https://api.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?resourceVersion=27202&timeout=8m4s&timeoutSeconds=484&watch=true: dial tcp 3.209.172.164:6443: connect: connection refused
Apr 29 06:45:25.215 E kube-apiserver failed contacting the API: the server could not find the requested resource (get clusterversions.config.openshift.io)
Apr 29 06:45:26.750 E clusteroperator/network changed Degraded to True: ApplyOperatorConfig: Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster: could not retrieve existing (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster: Get https://api-int.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-network-operator/configmaps/applied-cluster: http2: server sent GOAWAY and closed the connection; LastStreamID=29, ErrCode=NO_ERROR, debug=""
Apr 29 06:45:27.455 E ns/openshift-machine-config-operator pod/machine-config-daemon-f67wb node/ip-10-0-135-108.ec2.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 29 06:45:27.455 E ns/openshift-dns pod/dns-default-vbpm9 node/ip-10-0-172-158.ec2.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 29 06:45:27.455 E ns/openshift-dns pod/dns-default-vbpm9 node/ip-10-0-172-158.ec2.internal container=dns container exited with code 255 (Error): 
Apr 29 06:45:27.455 E ns/openshift-monitoring pod/node-exporter-5ml67 node/ip-10-0-172-158.ec2.internal container=node-exporter container exited with code 255 (Error): 
Apr 29 06:45:27.455 E ns/openshift-monitoring pod/node-exporter-5ml67 node/ip-10-0-172-158.ec2.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 29 06:45:27.456 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-148-116.ec2.internal node/ip-10-0-148-116.ec2.internal container=kube-apiserver-6 container exited with code 255 (Error): s valid for localhost, etcd.kube-system.svc, etcd.kube-system.svc.cluster.local, etcd.openshift-etcd.svc, etcd.openshift-etcd.svc.cluster.local, etcd-1.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com, not etcd-0.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com". Reconnecting...\nW0429 06:43:56.701187       1 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {etcd-1.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for localhost, etcd.kube-system.svc, etcd.kube-system.svc.cluster.local, etcd.openshift-etcd.svc, etcd.openshift-etcd.svc.cluster.local, etcd-1.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com, not etcd-0.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com". Reconnecting...\nW0429 06:43:57.044511       1 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for localhost, etcd.kube-system.svc, etcd.kube-system.svc.cluster.local, etcd.openshift-etcd.svc, etcd.openshift-etcd.svc.cluster.local, etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com, not etcd-0.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com". Reconnecting...\nF0429 06:44:00.047616       1 storage_decorator.go:57] Unable to create storage backend: config (&{etcd3 openshift.io [https://etcd-0.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:2379 https://etcd-1.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:2379 https://etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:2379] /etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.key /etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.crt /etc/kubernetes/static-pod-resources/configmaps/etcd-serving-ca/ca-bundle.crt true 0xc001b5a7e0 <nil> 5m0s 1m0s}), err (context deadline exceeded)\n
Apr 29 06:45:27.456 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-108.ec2.internal node/ip-10-0-135-108.ec2.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0429 06:38:01.616739       1 observer_polling.go:106] Starting file observer\nI0429 06:38:01.617842       1 certsync_controller.go:161] Starting CertSyncer\n
Apr 29 06:45:27.456 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-108.ec2.internal node/ip-10-0-135-108.ec2.internal container=kube-controller-manager-5 container exited with code 255 (Error): gmaps/kube-controller-manager?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nE0429 06:42:49.546578       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0429 06:42:56.735744       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found]\nI0429 06:43:17.307702       1 serving.go:88] Shutting down DynamicLoader\nI0429 06:43:17.307886       1 secure_serving.go:180] Stopped listening on [::]:10257\nE0429 06:43:17.307665       1 controllermanager.go:282] leaderelection lost\n
Apr 29 06:45:27.456 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-135-108.ec2.internal node/ip-10-0-135-108.ec2.internal container=scheduler container exited with code 255 (Error): ck kube-system/kube-scheduler: Get https://localhost:6443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0429 06:42:56.838807       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-scheduler: endpoints "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "endpoints" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found]\nE0429 06:43:00.194009       1 factory.go:832] scheduler cache UpdatePod failed: pod 0681a725-6a4a-11e9-bfa5-1200e197abf0 is not added to scheduler cache, so cannot be updated\nE0429 06:43:00.194047       1 factory.go:923] scheduler cache RemovePod failed: pod 0681a725-6a4a-11e9-bfa5-1200e197abf0 is not found in scheduler cache, so cannot be removed from it\nE0429 06:43:17.310795       1 server.go:259] lost master\n
Apr 29 06:45:27.456 E ns/openshift-image-registry pod/node-ca-nhj28 node/ip-10-0-172-158.ec2.internal container=node-ca container exited with code 255 (Error): 
Apr 29 06:45:31.503 E ns/openshift-controller-manager pod/controller-manager-fbj8n node/ip-10-0-135-108.ec2.internal container=controller-manager container exited with code 255 (Error): 
Apr 29 06:45:31.630 E ns/openshift-monitoring pod/prometheus-adapter-55fdcd6bcd-62rd2 node/ip-10-0-172-158.ec2.internal container=prometheus-adapter container exited with code 255 (Error): 
Apr 29 06:45:32.110 E ns/openshift-sdn pod/sdn-rl8sx node/ip-10-0-135-108.ec2.internal container=sdn container exited with code 255 (Error): Removing endpoints for openshift-apiserver-operator/metrics:https\nI0429 06:43:12.501945   50098 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-authentication/integrated-oauth-server:https to [10.128.0.29:6443]\nI0429 06:43:12.501987   50098 roundrobin.go:240] Delete endpoint 10.129.0.33:6443 for service "openshift-authentication/integrated-oauth-server:https"\nI0429 06:43:12.971469   50098 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-service-catalog-controller-manager-operator/metrics:https\nI0429 06:43:16.117577   50098 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-operator-lifecycle-manager/catalog-operator-metrics:https-metrics\nI0429 06:43:16.338982   50098 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-operator-lifecycle-manager/olm-operators:grpc\nI0429 06:43:16.513646   50098 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.130.0.65:5443]\nI0429 06:43:16.513686   50098 roundrobin.go:240] Delete endpoint 10.129.0.58:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0429 06:43:16.861429   50098 service.go:344] Removing service port "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0429 06:43:16.890125   50098 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:\nI0429 06:43:17.005919   50098 roundrobin.go:276] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.130.0.65:5443]\nI0429 06:43:17.006047   50098 roundrobin.go:240] Delete endpoint 10.130.0.65:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0429 06:43:17.052926   50098 service.go:319] Adding new service port "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:" at 172.30.242.172:443/TCP\ninterrupt: Gracefully shutting down ...\n
Apr 29 06:45:32.401 E ns/openshift-machine-config-operator pod/machine-config-daemon-m5vgr node/ip-10-0-172-158.ec2.internal container=machine-config-daemon container exited with code 255 (Error): 
Apr 29 06:45:32.795 E ns/openshift-monitoring pod/kube-state-metrics-8665b66669-kzmjt node/ip-10-0-172-158.ec2.internal container=kube-state-metrics container exited with code 255 (Error): 
Apr 29 06:45:32.795 E ns/openshift-monitoring pod/kube-state-metrics-8665b66669-kzmjt node/ip-10-0-172-158.ec2.internal container=kube-rbac-proxy-self container exited with code 255 (Error): 
Apr 29 06:45:32.795 E ns/openshift-monitoring pod/kube-state-metrics-8665b66669-kzmjt node/ip-10-0-172-158.ec2.internal container=kube-rbac-proxy-main container exited with code 255 (Error): 
Apr 29 06:45:32.888 E ns/openshift-dns pod/dns-default-ckrkx node/ip-10-0-135-108.ec2.internal container=dns container exited with code 255 (Error): 
Apr 29 06:45:32.888 E ns/openshift-dns pod/dns-default-ckrkx node/ip-10-0-135-108.ec2.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 29 06:45:33.353 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-64c9b58b7-22jdm node/ip-10-0-173-108.ec2.internal container=operator container exited with code 255 (Error): 
Apr 29 06:45:33.597 E ns/openshift-monitoring pod/grafana-6d765cbddc-jwdjk node/ip-10-0-172-158.ec2.internal container=grafana container exited with code 255 (Error): 
Apr 29 06:45:33.597 E ns/openshift-monitoring pod/grafana-6d765cbddc-jwdjk node/ip-10-0-172-158.ec2.internal container=grafana-proxy container exited with code 255 (Error): 
Apr 29 06:45:34.597 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-172-158.ec2.internal container=prom-label-proxy container exited with code 255 (Error): 
Apr 29 06:45:34.597 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-172-158.ec2.internal container=prometheus container exited with code 255 (Error): 
Apr 29 06:45:34.597 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-172-158.ec2.internal container=prometheus-config-reloader container exited with code 255 (Error): 
Apr 29 06:45:34.597 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-172-158.ec2.internal container=rules-configmap-reloader container exited with code 255 (Error): 
Apr 29 06:45:34.597 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-172-158.ec2.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 29 06:45:34.597 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-172-158.ec2.internal container=prometheus-proxy container exited with code 255 (OOMKilled): 
Apr 29 06:45:35.172 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-148-116.ec2.internal node/ip-10-0-148-116.ec2.internal container=kube-apiserver-6 container exited with code 255 (Error):  addrConn.createTransport failed to connect to {etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for localhost, etcd.kube-system.svc, etcd.kube-system.svc.cluster.local, etcd.openshift-etcd.svc, etcd.openshift-etcd.svc.cluster.local, etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com, not etcd-0.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com". Reconnecting...\nW0429 06:44:37.165655       1 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {etcd-0.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.135.108:2379: connect: connection refused". Reconnecting...\nW0429 06:44:37.842692       1 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {etcd-1.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for localhost, etcd.kube-system.svc, etcd.kube-system.svc.cluster.local, etcd.openshift-etcd.svc, etcd.openshift-etcd.svc.cluster.local, etcd-1.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com, not etcd-0.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com". Reconnecting...\nF0429 06:44:40.593919       1 storage_decorator.go:57] Unable to create storage backend: config (&{etcd3 openshift.io [https://etcd-0.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:2379 https://etcd-1.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:2379 https://etcd-2.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:2379] /etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.key /etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.crt /etc/kubernetes/static-pod-resources/configmaps/etcd-serving-ca/ca-bundle.crt true 0xc001e53200 <nil> 5m0s 1m0s}), err (dial tcp 10.0.135.108:2379: connect: connection refused)\n
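The repeated x509 errors in the two kube-apiserver events above are the etcd client's standard hostname check: the name being dialed (etcd-0.ci-op-...) must appear in the DNS SANs of the certificate the server presents, and the logged certificate lists only the etcd service names and etcd-1/etcd-2. A small, hypothetical checker (certificate path and hostname are placeholders, not taken from this job) reproduces that SAN match offline:

// Illustrative sketch only: print a serving certificate's DNS SANs and re-run
// the hostname check that fails above ("certificate is valid for ..., not etcd-0...").
package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "log"
    "os"
)

func main() {
    // usage: sancheck <cert.pem> <hostname>, e.g. sancheck etcd-serving.crt etcd-0.example.com
    if len(os.Args) != 3 {
        log.Fatal("usage: sancheck <cert.pem> <hostname>")
    }
    pemBytes, err := os.ReadFile(os.Args[1])
    if err != nil {
        log.Fatal(err)
    }
    block, _ := pem.Decode(pemBytes)
    if block == nil {
        log.Fatal("no PEM block found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("DNS SANs:", cert.DNSNames)
    // VerifyHostname performs the same SAN match the TLS client does during the
    // handshake; a mismatch returns "x509: certificate is valid for ..., not <hostname>".
    if err := cert.VerifyHostname(os.Args[2]); err != nil {
        fmt.Println("hostname check failed:", err)
    } else {
        fmt.Println("hostname check passed for", os.Args[2])
    }
}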
Apr 29 06:45:35.689 E ns/openshift-monitoring pod/node-exporter-tjhkc node/ip-10-0-135-108.ec2.internal container=node-exporter container exited with code 255 (Error): 
Apr 29 06:45:35.689 E ns/openshift-monitoring pod/node-exporter-tjhkc node/ip-10-0-135-108.ec2.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 29 06:45:35.801 E ns/openshift-sdn pod/sdn-9jlpv node/ip-10-0-172-158.ec2.internal container=sdn container exited with code 255 (Error): nal-default:metrics"\nI0429 06:43:09.263591   18692 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-ingress/router-internal-default:https to [10.129.2.5:443]\nI0429 06:43:09.263602   18692 roundrobin.go:240] Delete endpoint 10.128.2.17:443 for service "openshift-ingress/router-internal-default:https"\nI0429 06:43:09.263785   18692 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-ingress/router-default:https to [10.129.2.5:443]\nI0429 06:43:09.263801   18692 roundrobin.go:240] Delete endpoint 10.128.2.17:443 for service "openshift-ingress/router-default:https"\nI0429 06:43:09.263815   18692 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-ingress/router-default:http to [10.129.2.5:80]\nI0429 06:43:09.263825   18692 roundrobin.go:240] Delete endpoint 10.128.2.17:80 for service "openshift-ingress/router-default:http"\nI0429 06:43:09.280412   18692 roundrobin.go:310] LoadBalancerRR: Setting endpoints for e2e-tests-service-upgrade-pr4dh/service-test: to [10.129.2.14:80]\nI0429 06:43:09.280447   18692 roundrobin.go:240] Delete endpoint 10.128.2.13:80 for service "e2e-tests-service-upgrade-pr4dh/service-test:"\nI0429 06:43:09.280491   18692 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-monitoring/kube-state-metrics:https-self\nI0429 06:43:09.280503   18692 roundrobin.go:338] LoadBalancerRR: Removing endpoints for openshift-monitoring/kube-state-metrics:https-main\nI0429 06:43:09.600561   18692 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-etcd/etcd:etcd-metrics to [10.0.135.108:9979 10.0.148.116:9979 10.0.173.108:9979]\nI0429 06:43:09.600591   18692 roundrobin.go:240] Delete endpoint 10.0.135.108:9979 for service "openshift-etcd/etcd:etcd-metrics"\nI0429 06:43:09.600609   18692 roundrobin.go:310] LoadBalancerRR: Setting endpoints for openshift-etcd/etcd:etcd to [10.0.135.108:2379 10.0.148.116:2379 10.0.173.108:2379]\nI0429 06:43:09.600616   18692 roundrobin.go:240] Delete endpoint 10.0.135.108:2379 for service "openshift-etcd/etcd:etcd"\n
Apr 29 06:45:36.195 E ns/openshift-sdn pod/ovs-92knc node/ip-10-0-172-158.ec2.internal container=openvswitch container exited with code 255 (Error): 26.120Z|00115|memory|INFO|50136 kB peak resident set size after 10.1 seconds\n2019-04-29T06:27:26.120Z|00116|memory|INFO|handlers:1 ports:13 revalidators:1 rules:125 udpif keys:144\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2019-04-29T06:27:25.987Z|00011|memory|INFO|5864 kB peak resident set size after 10.0 seconds\n2019-04-29T06:27:25.987Z|00012|memory|INFO|cells:829 json-caches:1 monitors:2 sessions:2\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2019-04-29T06:30:23.750Z|00117|connmgr|INFO|br0<->unix#118: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:30:23.771Z|00118|bridge|INFO|bridge br0: deleted interface vethc6b5fb04 on port 11\n2019-04-29T06:30:39.085Z|00119|bridge|INFO|bridge br0: added interface vethd93e85e7 on port 13\n2019-04-29T06:30:39.121Z|00120|connmgr|INFO|br0<->unix#121: 5 flow_mods in the last 0 s (5 adds)\n2019-04-29T06:30:39.160Z|00121|connmgr|INFO|br0<->unix#124: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:40:20.313Z|00122|bridge|INFO|bridge br0: added interface veth43d30894 on port 14\n2019-04-29T06:40:20.365Z|00123|connmgr|INFO|br0<->unix#192: 5 flow_mods in the last 0 s (5 adds)\n2019-04-29T06:40:20.421Z|00124|connmgr|INFO|br0<->unix#196: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2019-04-29T06:40:20.433Z|00125|connmgr|INFO|br0<->unix#198: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:40:20.476Z|00126|bridge|INFO|bridge br0: added interface veth6f6a54d4 on port 15\n2019-04-29T06:40:20.509Z|00127|connmgr|INFO|br0<->unix#201: 5 flow_mods in the last 0 s (5 adds)\n2019-04-29T06:40:20.549Z|00128|connmgr|INFO|br0<->unix#204: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:40:20.673Z|00129|bridge|INFO|bridge br0: added interface veth0c39378b on port 16\n2019-04-29T06:40:20.709Z|00130|connmgr|INFO|br0<->unix#207: 5 flow_mods in the last 0 s (5 adds)\n2019-04-29T06:40:20.760Z|00131|connmgr|INFO|br0<->unix#211: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:40:20.763Z|00132|connmgr|INFO|br0<->unix#213: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\nTerminated\n
Apr 29 06:45:36.694 E ns/openshift-apiserver pod/apiserver-4qcpm node/ip-10-0-135-108.ec2.internal container=openshift-apiserver container exited with code 255 (Error): public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found]\nI0429 06:43:02.133941       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0429 06:43:02.134101       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:43:02.134145       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:43:02.134119       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0429 06:43:02.153318       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:43:02.788303       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0429 06:43:02.788478       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:43:02.788879       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0429 06:43:02.788989       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:43:02.810427       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nE0429 06:43:17.101246       1 watch.go:212] unable to encode watch object <nil>: expected pointer, but got invalid kind\n
Apr 29 06:45:36.795 E ns/openshift-cluster-node-tuning-operator pod/tuned-gbd28 node/ip-10-0-172-158.ec2.internal container=tuned container exited with code 255 (Error): 
Apr 29 06:45:37.196 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-172-158.ec2.internal container=config-reloader container exited with code 255 (Error): 
Apr 29 06:45:37.196 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-172-158.ec2.internal container=alertmanager container exited with code 255 (Error): 
Apr 29 06:45:37.196 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-172-158.ec2.internal container=alertmanager-proxy container exited with code 255 (Error): 
Apr 29 06:45:37.510 E ns/openshift-sdn pod/sdn-controller-npwbj node/ip-10-0-135-108.ec2.internal container=sdn-controller container exited with code 255 (Error): I0429 06:26:48.541960       1 leaderelection.go:205] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0429 06:42:10.401573       1 leaderelection.go:270] error retrieving resource lock openshift-sdn/openshift-network-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps openshift-network-controller)\nE0429 06:42:49.987229       1 leaderelection.go:270] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""\n
Apr 29 06:45:37.798 E ns/openshift-multus pod/multus-wr9fs node/ip-10-0-172-158.ec2.internal container=kube-multus container exited with code 255 (Error): 
Apr 29 06:45:37.902 E ns/openshift-machine-config-operator pod/machine-config-server-zmfgt node/ip-10-0-135-108.ec2.internal container=machine-config-server container exited with code 255 (Error): 
Apr 29 06:45:45.087 E ns/openshift-image-registry pod/node-ca-tkt4k node/ip-10-0-135-108.ec2.internal container=node-ca container exited with code 255 (Error): 
Apr 29 06:45:46.093 E ns/openshift-cluster-node-tuning-operator pod/tuned-gjx2q node/ip-10-0-135-108.ec2.internal container=tuned container exited with code 255 (Error): 
Apr 29 06:45:49.477 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 29 06:45:50.106 E ns/openshift-multus pod/multus-vmdxg node/ip-10-0-135-108.ec2.internal container=kube-multus container exited with code 255 (Error): 
Apr 29 06:45:50.490 E ns/openshift-sdn pod/ovs-8ktwx node/ip-10-0-135-108.ec2.internal container=openvswitch container exited with code 255 (Error): )\n2019-04-29T06:43:12.954Z|00266|bridge|INFO|bridge br0: deleted interface vethab6dc2f7 on port 17\n2019-04-29T06:43:13.044Z|00267|connmgr|INFO|br0<->unix#436: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:43:13.139Z|00268|connmgr|INFO|br0<->unix#439: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:43:13.196Z|00269|bridge|INFO|bridge br0: deleted interface vethec5eec29 on port 34\n2019-04-29T06:43:13.478Z|00270|connmgr|INFO|br0<->unix#442: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:43:13.519Z|00271|bridge|INFO|bridge br0: deleted interface vethe40c4b02 on port 18\n2019-04-29T06:43:13.729Z|00272|connmgr|INFO|br0<->unix#445: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:43:13.763Z|00273|bridge|INFO|bridge br0: deleted interface vethffd96443 on port 3\n2019-04-29T06:43:13.865Z|00274|connmgr|INFO|br0<->unix#448: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:43:13.932Z|00275|bridge|INFO|bridge br0: deleted interface veth884ecb13 on port 21\n2019-04-29T06:43:15.308Z|00276|connmgr|INFO|br0<->unix#451: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:43:15.355Z|00277|bridge|INFO|bridge br0: deleted interface vethb92d9b22 on port 5\n\n==> /var/log/openvswitch/ovsdb-server.log <==\n2019-04-29T06:43:15.256Z|00022|jsonrpc|WARN|unix#369: send error: Broken pipe\n2019-04-29T06:43:15.256Z|00023|reconnect|WARN|unix#369: connection dropped (Broken pipe)\n\n==> /var/log/openvswitch/ovs-vswitchd.log <==\n2019-04-29T06:43:15.779Z|00278|connmgr|INFO|br0<->unix#454: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:43:15.848Z|00279|connmgr|INFO|br0<->unix#457: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:43:15.925Z|00280|bridge|INFO|bridge br0: deleted interface vetha00ea8a8 on port 30\n2019-04-29T06:43:15.995Z|00281|connmgr|INFO|br0<->unix#460: 2 flow_mods in the last 0 s (2 deletes)\n2019-04-29T06:43:16.088Z|00282|connmgr|INFO|br0<->unix#463: 4 flow_mods in the last 0 s (4 deletes)\n2019-04-29T06:43:16.134Z|00283|bridge|INFO|bridge br0: deleted interface veth0b40a38c on port 33\nTerminated\n
Apr 29 06:46:00.821 E ns/openshift-monitoring pod/telemeter-client-66dc7947fd-qxbmz node/ip-10-0-131-108.ec2.internal container=telemeter-client container exited with code 2 (Error): 
Apr 29 06:46:00.821 E ns/openshift-monitoring pod/telemeter-client-66dc7947fd-qxbmz node/ip-10-0-131-108.ec2.internal container=reload container exited with code 2 (Error): 
Apr 29 06:46:09.873 E ns/openshift-marketplace pod/community-operators-f8b67869c-d57q2 node/ip-10-0-131-108.ec2.internal container=community-operators container exited with code 2 (Error): 
Apr 29 06:46:10.311 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-172-158.ec2.internal container=prometheus container exited with code 1 (Error): 
Apr 29 06:46:11.414 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.ec2.internal container=prometheus container exited with code 1 (Error): 
Apr 29 06:46:14.505 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-172-158.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): 
Apr 29 06:46:14.505 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-172-158.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 
Apr 29 06:46:14.505 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-172-158.ec2.internal container=prometheus-proxy container exited with code 2 (Error): 
Apr 29 06:46:17.690 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-108.ec2.internal node/ip-10-0-135-108.ec2.internal container=kube-apiserver-cert-syncer-6 container exited with code 255 (Error): alhost:6443/api/v1/namespaces/$%7BPOD_NAMESPACE%7D/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0429 06:42:48.447803       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/$%7BPOD_NAMESPACE%7D/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0429 06:42:49.452305       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/$%7BPOD_NAMESPACE%7D/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0429 06:42:49.454800       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/$%7BPOD_NAMESPACE%7D/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0429 06:42:50.463252       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/$%7BPOD_NAMESPACE%7D/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0429 06:42:50.465054       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/$%7BPOD_NAMESPACE%7D/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0429 06:42:51.464284       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/$%7BPOD_NAMESPACE%7D/configmaps?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0429 06:42:51.465889       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Get https://localhost:6443/api/v1/namespaces/$%7BPOD_NAMESPACE%7D/secrets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\n
Apr 29 06:46:17.690 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-108.ec2.internal node/ip-10-0-135-108.ec2.internal container=kube-apiserver-6 container exited with code 255 (Error): g channel.\nI0429 06:43:10.935529       1 cacher.go:605] cacher (*apps.Deployment): 2 objects queued in incoming channel.\nI0429 06:43:10.936843       1 cacher.go:605] cacher (*apps.ReplicaSet): 1 objects queued in incoming channel.\nI0429 06:43:10.936894       1 cacher.go:605] cacher (*apps.ReplicaSet): 2 objects queued in incoming channel.\nI0429 06:43:10.937053       1 cacher.go:605] cacher (*apps.Deployment): 1 objects queued in incoming channel.\nI0429 06:43:10.937080       1 cacher.go:605] cacher (*apps.Deployment): 2 objects queued in incoming channel.\nI0429 06:43:12.076746       1 controller.go:107] OpenAPI AggregationController: Processing item v1.oauth.openshift.io\nI0429 06:43:12.708966       1 controller.go:608] quota admission added evaluator for: deployments.apps\nI0429 06:43:14.739521       1 controller.go:107] OpenAPI AggregationController: Processing item v1.template.openshift.io\nE0429 06:43:16.884737       1 available_controller.go:400] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0429 06:43:17.014037       1 available_controller.go:400] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0429 06:43:17.072163       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\nE0429 06:43:17.077211       1 available_controller.go:400] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0429 06:43:17.096945       1 log.go:172] suppressing panic for copyResponse error in test; copy error: context canceled\n
Apr 29 06:46:18.088 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-135-108.ec2.internal node/ip-10-0-135-108.ec2.internal container=scheduler container exited with code 255 (Error): ck kube-system/kube-scheduler: Get https://localhost:6443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0429 06:42:56.838807       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-scheduler: endpoints "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "endpoints" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found]\nE0429 06:43:00.194009       1 factory.go:832] scheduler cache UpdatePod failed: pod 0681a725-6a4a-11e9-bfa5-1200e197abf0 is not added to scheduler cache, so cannot be updated\nE0429 06:43:00.194047       1 factory.go:923] scheduler cache RemovePod failed: pod 0681a725-6a4a-11e9-bfa5-1200e197abf0 is not found in scheduler cache, so cannot be removed from it\nE0429 06:43:17.310795       1 server.go:259] lost master\n
Apr 29 06:46:18.494 E ns/openshift-etcd pod/etcd-member-ip-10-0-135-108.ec2.internal node/ip-10-0-135-108.ec2.internal container=etcd-metrics container exited with code 255 (Error): 2019-04-29 06:42:48.885792 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-0.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2019-04-29 06:42:48.887129 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2019-04-29 06:42:48.887962 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-0.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2019/04/29 06:42:48 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.135.108:9978: connect: connection refused"; Reconnecting to {etcd-0.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}\n2019-04-29 06:42:49.920826 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 29 06:46:18.494 E ns/openshift-etcd pod/etcd-member-ip-10-0-135-108.ec2.internal node/ip-10-0-135-108.ec2.internal container=etcd-member container exited with code 255 (Error): ssage reader)\n2019-04-29 06:42:48.944883 I | rafthttp: peer ba63a4fffcc9458d became active\n2019-04-29 06:42:48.944898 I | rafthttp: established a TCP streaming connection with peer ba63a4fffcc9458d (stream Message writer)\n2019-04-29 06:42:48.945037 I | raft: raft.node: 6a4e06c0e09bcdc4 elected leader ba63a4fffcc9458d at term 8\n2019-04-29 06:42:48.946152 I | rafthttp: established a TCP streaming connection with peer ba63a4fffcc9458d (stream MsgApp v2 writer)\n2019-04-29 06:42:48.946615 I | rafthttp: established a TCP streaming connection with peer 617d0e8440ef7c62 (stream MsgApp v2 writer)\n2019-04-29 06:42:48.967321 I | rafthttp: established a TCP streaming connection with peer 617d0e8440ef7c62 (stream MsgApp v2 reader)\n2019-04-29 06:42:48.967765 I | rafthttp: established a TCP streaming connection with peer ba63a4fffcc9458d (stream MsgApp v2 reader)\n2019-04-29 06:42:48.974542 I | rafthttp: established a TCP streaming connection with peer 617d0e8440ef7c62 (stream Message reader)\n2019-04-29 06:42:48.975374 I | rafthttp: established a TCP streaming connection with peer ba63a4fffcc9458d (stream Message reader)\n2019-04-29 06:42:48.990190 I | etcdserver: 6a4e06c0e09bcdc4 initialzed peer connection; fast-forwarding 8 ticks (election ticks 10) with 2 active peer(s)\n2019-04-29 06:42:49.044884 I | embed: ready to serve client requests\n2019-04-29 06:42:49.045443 I | etcdserver: published {Name:etcd-member-ip-10-0-135-108.ec2.internal ClientURLs:[https://10.0.135.108:2379]} to cluster 7258b06eb20735e\n2019-04-29 06:42:49.046817 I | embed: serving client requests on [::]:2379\n2019-04-29 06:42:49.203287 I | embed: rejected connection from "127.0.0.1:33542" (error "tls: failed to verify client's certificate: x509: certificate specifies an incompatible key usage", ServerName "")\nproto: no coders for int\nproto: no encoder for ValueSize int [GetProperties]\nWARNING: 2019/04/29 06:42:49 Failed to dial 0.0.0.0:2379: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate"; please retry.\n
Apr 29 06:46:18.889 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-108.ec2.internal node/ip-10-0-135-108.ec2.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0429 06:38:01.616739       1 observer_polling.go:106] Starting file observer\nI0429 06:38:01.617842       1 certsync_controller.go:161] Starting CertSyncer\n
Apr 29 06:46:18.889 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-108.ec2.internal node/ip-10-0-135-108.ec2.internal container=kube-controller-manager-5 container exited with code 255 (Error): gmaps/kube-controller-manager?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nE0429 06:42:49.546578       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0429 06:42:56.735744       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found]\nI0429 06:43:17.307702       1 serving.go:88] Shutting down DynamicLoader\nI0429 06:43:17.307886       1 secure_serving.go:180] Stopped listening on [::]:10257\nE0429 06:43:17.307665       1 controllermanager.go:282] leaderelection lost\n
Apr 29 06:46:31.379 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-172-158.ec2.internal container=prometheus container exited with code 1 (Error): 
Apr 29 06:46:46.436 E clusteroperator/authentication changed Degraded to True: DegradedOperatorSyncLoopError: Degraded: failed handling the route: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io integrated-oauth-server)
Apr 29 06:46:56.838 E ns/openshift-operator-lifecycle-manager pod/packageserver-6dc8588477-c72k7 node/ip-10-0-148-116.ec2.internal container=packageserver container exited with code 137 (Error): 
Apr 29 06:46:57.569 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.ec2.internal container=prometheus-proxy container exited with code 2 (Error): 
Apr 29 06:46:57.569 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.ec2.internal container=rules-configmap-reloader container exited with code 2 (OOMKilled): 
Apr 29 06:46:57.569 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.ec2.internal container=prometheus-config-reloader container exited with code 2 (OOMKilled): 
Apr 29 06:47:08.644 E ns/openshift-operator-lifecycle-manager pod/packageserver-6dc8588477-nbpn2 node/ip-10-0-173-108.ec2.internal container=packageserver container exited with code 137 (Error): 
Apr 29 06:47:12.777 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.ec2.internal container=prometheus container exited with code 1 (Error): 
Apr 29 06:47:33.568 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-172-158.ec2.internal container=prometheus container exited with code 1 (Error): 
Apr 29 06:49:19.475 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 29 06:49:42.353 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-85dfbb5fc-8tmpk node/ip-10-0-148-116.ec2.internal container=guard container exited with code 255 (Error): 
Apr 29 06:49:47.562 E ns/openshift-cluster-node-tuning-operator pod/tuned-x5lws node/ip-10-0-131-108.ec2.internal container=tuned container exited with code 255 (Error): 
Apr 29 06:49:47.590 E ns/openshift-monitoring pod/node-exporter-vsbhn node/ip-10-0-131-108.ec2.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 29 06:49:47.590 E ns/openshift-monitoring pod/node-exporter-vsbhn node/ip-10-0-131-108.ec2.internal container=node-exporter container exited with code 255 (Error): 
Apr 29 06:49:47.647 E ns/openshift-image-registry pod/image-registry-85d744ff45-b2nsp node/ip-10-0-131-108.ec2.internal container=registry container exited with code 255 (Error): 
Apr 29 06:49:47.808 E ns/openshift-ingress pod/router-default-7ff89986f6-2bnvj node/ip-10-0-131-108.ec2.internal container=router container exited with code 255 (Error): :46:56.560036       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:47:01.550733       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:47:06.543593       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:47:14.026880       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:47:19.000451       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:47:34.811000       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:47:39.804982       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:48:02.407758       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:48:10.106703       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:48:40.400487       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:48:45.376922       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nW0429 06:48:47.691250       1 reflector.go:341] github.com/openshift/router/pkg/router/template/service_lookup.go:32: watch of *v1.Service ended with: very short watch: github.com/openshift/router/pkg/router/template/service_lookup.go:32: Unexpected watch close - watch lasted less than a second and no items received\n
Apr 29 06:49:53.211 E ns/openshift-image-registry pod/node-ca-jhcwt node/ip-10-0-131-108.ec2.internal container=node-ca container exited with code 255 (Error): 
Apr 29 06:49:53.891 E ns/openshift-monitoring pod/node-exporter-nlgrk node/ip-10-0-148-116.ec2.internal container=kube-rbac-proxy container exited with code 255 (Error): 
Apr 29 06:49:53.891 E ns/openshift-monitoring pod/node-exporter-nlgrk node/ip-10-0-148-116.ec2.internal container=node-exporter container exited with code 255 (Error): 
Apr 29 06:49:54.893 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-148-116.ec2.internal node/ip-10-0-148-116.ec2.internal container=kube-controller-manager-5 container exited with code 255 (Error): /v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0429 06:44:54.859223       1 webhook.go:106] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0429 06:44:54.859264       1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0429 06:44:59.937925       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0429 06:45:15.704210       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nE0429 06:45:24.909281       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nE0429 06:45:25.013690       1 webhook.go:106] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\nE0429 06:45:25.013863       1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope]\n
Apr 29 06:49:54.893 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-148-116.ec2.internal node/ip-10-0-148-116.ec2.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0429 06:36:58.826251       1 observer_polling.go:106] Starting file observer\nI0429 06:36:58.827173       1 certsync_controller.go:161] Starting CertSyncer\n
Apr 29 06:49:55.410 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-131-108.ec2.internal container=alertmanager-proxy container exited with code 255 (Error): 
Apr 29 06:49:55.410 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-131-108.ec2.internal container=alertmanager container exited with code 255 (Error): 
Apr 29 06:49:55.410 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-131-108.ec2.internal container=config-reloader container exited with code 255 (Error): 
Apr 29 06:49:56.008 E ns/openshift-monitoring pod/prometheus-adapter-55fdcd6bcd-f82rc node/ip-10-0-131-108.ec2.internal container=prometheus-adapter container exited with code 255 (Error): 
Apr 29 06:49:56.411 E ns/openshift-monitoring pod/prometheus-operator-5b9cd568bb-pkgld node/ip-10-0-131-108.ec2.internal container=prometheus-operator container exited with code 255 (Error): 
Apr 29 06:49:57.090 E ns/openshift-dns pod/dns-default-ngjv5 node/ip-10-0-148-116.ec2.internal container=dns container exited with code 255 (Error): 
Apr 29 06:49:57.090 E ns/openshift-dns pod/dns-default-ngjv5 node/ip-10-0-148-116.ec2.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 29 06:50:07.290 E ns/openshift-controller-manager pod/controller-manager-b6cxh node/ip-10-0-148-116.ec2.internal container=controller-manager container exited with code 255 (Error): 
Apr 29 06:50:07.409 E ns/openshift-dns pod/dns-default-77drg node/ip-10-0-131-108.ec2.internal container=dns-node-resolver container exited with code 255 (Error): 
Apr 29 06:50:07.409 E ns/openshift-dns pod/dns-default-77drg node/ip-10-0-131-108.ec2.internal container=dns container exited with code 255 (Error): 
Apr 29 06:50:10.267 E clusteroperator/monitoring changed Degraded to True: Failed to rollout the stack. Error: running task Updating node-exporter failed: reconciling node-exporter SecurityContextConstraints failed: updating SecurityContextConstraints object failed: the server is currently unable to handle the request (put securitycontextconstraints.security.openshift.io node-exporter)
Apr 29 06:50:12.490 E ns/openshift-apiserver pod/apiserver-k29js node/ip-10-0-148-116.ec2.internal container=openshift-apiserver container exited with code 255 (Error): e addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:48:44.837069       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0429 06:48:44.837257       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:48:44.837554       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0429 06:48:44.837600       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:48:44.853338       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:48:45.031889       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0429 06:48:45.032334       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:48:45.032904       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0429 06:48:45.032999       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:48:45.060374       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:48:45.205074       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []\nI0429 06:48:45.205273       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:48:45.205329       1 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0429 06:48:45.205367       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\nI0429 06:48:45.226699       1 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{etcd.openshift-etcd.svc:2379 <nil>}]\n
Apr 29 06:50:13.490 E ns/openshift-image-registry pod/node-ca-29ptk node/ip-10-0-148-116.ec2.internal container=node-ca container exited with code 255 (Error): 
Apr 29 06:50:32.291 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-148-116.ec2.internal node/ip-10-0-148-116.ec2.internal container=kube-apiserver-6 container exited with code 255 (Error): 8642       1 controller.go:107] OpenAPI AggregationController: Processing item v1.template.openshift.io\nI0429 06:47:48.086161       1 controller.go:107] OpenAPI AggregationController: Processing item v1.apps.openshift.io\nI0429 06:47:48.477955       1 controller.go:107] OpenAPI AggregationController: Processing item v1.route.openshift.io\nI0429 06:47:50.578932       1 controller.go:107] OpenAPI AggregationController: Processing item v1.authorization.openshift.io\nI0429 06:47:54.069167       1 controller.go:107] OpenAPI AggregationController: Processing item v1.image.openshift.io\nI0429 06:47:57.483885       1 controller.go:107] OpenAPI AggregationController: Processing item v1.project.openshift.io\nI0429 06:48:00.284823       1 controller.go:107] OpenAPI AggregationController: Processing item v1.oauth.openshift.io\nI0429 06:48:00.567850       1 controller.go:107] OpenAPI AggregationController: Processing item v1.security.openshift.io\nI0429 06:48:02.219651       1 controller.go:107] OpenAPI AggregationController: Processing item v1.build.openshift.io\nI0429 06:48:03.392938       1 controller.go:107] OpenAPI AggregationController: Processing item v1.packages.operators.coreos.com\nI0429 06:48:36.576688       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io\nE0429 06:48:36.580141       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist\nI0429 06:48:36.580170       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.\nI0429 06:48:41.825358       1 controller.go:107] OpenAPI AggregationController: Processing item v1.quota.openshift.io\nI0429 06:48:42.212581       1 controller.go:107] OpenAPI AggregationController: Processing item v1.user.openshift.io\nI0429 06:48:44.150292       1 controller.go:107] OpenAPI AggregationController: Processing item v1.template.openshift.io\nI0429 06:48:47.276077       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\n
Apr 29 06:50:32.291 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-148-116.ec2.internal node/ip-10-0-148-116.ec2.internal container=kube-apiserver-cert-syncer-6 container exited with code 255 (Error): I0429 06:36:14.674594       1 observer_polling.go:106] Starting file observer\nI0429 06:36:14.674912       1 certsync_controller.go:161] Starting CertSyncer\n
Apr 29 06:50:34.293 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-148-116.ec2.internal node/ip-10-0-148-116.ec2.internal container=scheduler container exited with code 255 (Error):       1 cache.go:293] Pod d62c51c9-6a4a-11e9-88ff-1200e197abf0 was assumed to be on ip-10-0-148-116.ec2.internal but got added to ip-10-0-135-108.ec2.internal\nI0429 06:48:46.505659       1 scheduler.go:491] Failed to bind pod: openshift-service-ca/apiservice-cabundle-injector-7dd97dc664-wm5lr\nE0429 06:48:46.505678       1 scheduler.go:493] scheduler cache ForgetPod failed: pod d62c51c9-6a4a-11e9-88ff-1200e197abf0 was assumed on ip-10-0-135-108.ec2.internal but assigned to ip-10-0-148-116.ec2.internal\nE0429 06:48:46.505690       1 factory.go:1519] Error scheduling openshift-service-ca/apiservice-cabundle-injector-7dd97dc664-wm5lr: Operation cannot be fulfilled on pods/binding "apiservice-cabundle-injector-7dd97dc664-wm5lr": pod apiservice-cabundle-injector-7dd97dc664-wm5lr is already assigned to node "ip-10-0-148-116.ec2.internal"; retrying\nE0429 06:48:46.521799       1 scheduler.go:598] error binding pod: Operation cannot be fulfilled on pods/binding "apiservice-cabundle-injector-7dd97dc664-wm5lr": pod apiservice-cabundle-injector-7dd97dc664-wm5lr is already assigned to node "ip-10-0-148-116.ec2.internal"\nI0429 06:48:46.728326       1 scheduler.go:491] Failed to bind pod: openshift-cloud-credential-operator/cloud-credential-operator-797b5d88dc-qt9zd\nE0429 06:48:46.728353       1 scheduler.go:493] scheduler cache ForgetPod failed: pod d64bbd14-6a4a-11e9-88ff-1200e197abf0 wasn't assumed so cannot be forgotten\nE0429 06:48:46.728365       1 factory.go:1519] Error scheduling openshift-cloud-credential-operator/cloud-credential-operator-797b5d88dc-qt9zd: Operation cannot be fulfilled on pods/binding "cloud-credential-operator-797b5d88dc-qt9zd": pod cloud-credential-operator-797b5d88dc-qt9zd is already assigned to node "ip-10-0-173-108.ec2.internal"; retrying\nE0429 06:48:46.747648       1 scheduler.go:598] error binding pod: Operation cannot be fulfilled on pods/binding "cloud-credential-operator-797b5d88dc-qt9zd": pod cloud-credential-operator-797b5d88dc-qt9zd is already assigned to node "ip-10-0-173-108.ec2.internal"\n
Apr 29 06:50:34.691 E ns/openshift-etcd pod/etcd-member-ip-10-0-148-116.ec2.internal node/ip-10-0-148-116.ec2.internal container=etcd-member container exited with code 255 (Error): 09.716109 I | rafthttp: established a TCP streaming connection with peer 617d0e8440ef7c62 (stream MsgApp v2 writer)\n2019-04-29 06:48:09.716184 I | embed: listening for metrics on https://0.0.0.0:9978\n2019-04-29 06:48:09.716423 I | rafthttp: peer 6a4e06c0e09bcdc4 became active\n2019-04-29 06:48:09.716449 I | rafthttp: established a TCP streaming connection with peer 6a4e06c0e09bcdc4 (stream Message writer)\n2019-04-29 06:48:09.716682 I | rafthttp: established a TCP streaming connection with peer 6a4e06c0e09bcdc4 (stream MsgApp v2 writer)\n2019-04-29 06:48:09.716852 I | rafthttp: established a TCP streaming connection with peer 617d0e8440ef7c62 (stream Message writer)\n2019-04-29 06:48:09.754085 I | rafthttp: established a TCP streaming connection with peer 6a4e06c0e09bcdc4 (stream Message reader)\n2019-04-29 06:48:09.757968 I | rafthttp: established a TCP streaming connection with peer 617d0e8440ef7c62 (stream Message reader)\n2019-04-29 06:48:09.758843 I | rafthttp: established a TCP streaming connection with peer 6a4e06c0e09bcdc4 (stream MsgApp v2 reader)\n2019-04-29 06:48:09.759398 I | rafthttp: established a TCP streaming connection with peer 617d0e8440ef7c62 (stream MsgApp v2 reader)\n2019-04-29 06:48:09.760697 I | etcdserver: ba63a4fffcc9458d initialzed peer connection; fast-forwarding 8 ticks (election ticks 10) with 2 active peer(s)\n2019-04-29 06:48:09.896780 I | etcdserver: published {Name:etcd-member-ip-10-0-148-116.ec2.internal ClientURLs:[https://10.0.148.116:2379]} to cluster 7258b06eb20735e\n2019-04-29 06:48:09.897014 I | embed: ready to serve client requests\n2019-04-29 06:48:09.903379 I | embed: serving client requests on [::]:2379\n2019-04-29 06:48:09.924719 I | embed: rejected connection from "127.0.0.1:55496" (error "tls: failed to verify client's certificate: x509: certificate specifies an incompatible key usage", ServerName "")\nWARNING: 2019/04/29 06:48:09 Failed to dial 0.0.0.0:2379: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate"; please retry.\n
Apr 29 06:50:34.691 E ns/openshift-etcd pod/etcd-member-ip-10-0-148-116.ec2.internal node/ip-10-0-148-116.ec2.internal container=etcd-metrics container exited with code 255 (Error): 2019-04-29 06:48:09.737145 I | etcdmain: ServerTLS: cert = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-metric:etcd-1.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/metric-ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2019-04-29 06:48:09.740565 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2019-04-29 06:48:09.741850 I | etcdmain: ClientTLS: cert = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.crt, key = /etc/ssl/etcd/system:etcd-peer:etcd-1.ci-op-bzjc847m-77109.origin-ci-int-aws.dev.rhcloud.com.key, ca = /etc/ssl/etcd/ca.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2019-04-29 06:48:09.777685 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Apr 29 06:50:35.092 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-148-116.ec2.internal node/ip-10-0-148-116.ec2.internal container=kube-controller-manager-5 container exited with code 255 (Error): /v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0429 06:44:54.859223       1 webhook.go:106] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0429 06:44:54.859264       1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0429 06:44:59.937925       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0429 06:45:15.704210       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nE0429 06:45:24.909281       1 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nE0429 06:45:25.013690       1 webhook.go:106] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\nE0429 06:45:25.013863       1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope]\n
Apr 29 06:50:35.092 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-148-116.ec2.internal node/ip-10-0-148-116.ec2.internal container=kube-controller-manager-cert-syncer-5 container exited with code 255 (Error): I0429 06:36:58.826251       1 observer_polling.go:106] Starting file observer\nI0429 06:36:58.827173       1 certsync_controller.go:161] Starting CertSyncer\n
Apr 29 06:50:37.292 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-148-116.ec2.internal node/ip-10-0-148-116.ec2.internal container=kube-apiserver-6 container exited with code 255 (Error): 8642       1 controller.go:107] OpenAPI AggregationController: Processing item v1.template.openshift.io\nI0429 06:47:48.086161       1 controller.go:107] OpenAPI AggregationController: Processing item v1.apps.openshift.io\nI0429 06:47:48.477955       1 controller.go:107] OpenAPI AggregationController: Processing item v1.route.openshift.io\nI0429 06:47:50.578932       1 controller.go:107] OpenAPI AggregationController: Processing item v1.authorization.openshift.io\nI0429 06:47:54.069167       1 controller.go:107] OpenAPI AggregationController: Processing item v1.image.openshift.io\nI0429 06:47:57.483885       1 controller.go:107] OpenAPI AggregationController: Processing item v1.project.openshift.io\nI0429 06:48:00.284823       1 controller.go:107] OpenAPI AggregationController: Processing item v1.oauth.openshift.io\nI0429 06:48:00.567850       1 controller.go:107] OpenAPI AggregationController: Processing item v1.security.openshift.io\nI0429 06:48:02.219651       1 controller.go:107] OpenAPI AggregationController: Processing item v1.build.openshift.io\nI0429 06:48:03.392938       1 controller.go:107] OpenAPI AggregationController: Processing item v1.packages.operators.coreos.com\nI0429 06:48:36.576688       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io\nE0429 06:48:36.580141       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist\nI0429 06:48:36.580170       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.\nI0429 06:48:41.825358       1 controller.go:107] OpenAPI AggregationController: Processing item v1.quota.openshift.io\nI0429 06:48:42.212581       1 controller.go:107] OpenAPI AggregationController: Processing item v1.user.openshift.io\nI0429 06:48:44.150292       1 controller.go:107] OpenAPI AggregationController: Processing item v1.template.openshift.io\nI0429 06:48:47.276077       1 controller.go:176] Shutting down kubernetes service endpoint reconciler\n
Apr 29 06:50:37.292 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-148-116.ec2.internal node/ip-10-0-148-116.ec2.internal container=kube-apiserver-cert-syncer-6 container exited with code 255 (Error): I0429 06:36:14.674594       1 observer_polling.go:106] Starting file observer\nI0429 06:36:14.674912       1 certsync_controller.go:161] Starting CertSyncer\n
Apr 29 06:51:08.933 E ns/openshift-operator-lifecycle-manager pod/packageserver-687f7cd879-7949w node/ip-10-0-173-108.ec2.internal container=packageserver container exited with code 137 (Error): 
Apr 29 06:51:28.041 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-85dfbb5fc-clxc9 node/ip-10-0-148-116.ec2.internal container=guard container exited with code 137 (Error): 
Apr 29 06:52:05.486 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-85dfbb5fc-mntz7 node/ip-10-0-135-108.ec2.internal container=guard container exited with code 137 (Error): 
Apr 29 06:53:20.259 E ns/openshift-monitoring pod/node-exporter-5mmsn node/ip-10-0-173-108.ec2.internal container=node-exporter container exited with code 143 (Error): 
Apr 29 06:53:21.858 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-7cc7f66895-b58qp node/ip-10-0-173-108.ec2.internal container=cluster-node-tuning-operator container exited with code 255 (Error): 
Apr 29 06:53:28.971 E ns/openshift-cluster-node-tuning-operator pod/tuned-575s8 node/ip-10-0-148-116.ec2.internal container=tuned container exited with code 143 (Error): 
Apr 29 06:53:29.956 E ns/openshift-marketplace pod/community-operators-6dcd7446b7-wl8q5 node/ip-10-0-172-158.ec2.internal container=community-operators container exited with code 2 (Error): 
Apr 29 06:53:31.426 E clusteroperator/marketplace changed Degraded to True: Operator exited
Apr 29 06:53:34.969 E ns/openshift-cluster-node-tuning-operator pod/tuned-cwt4t node/ip-10-0-157-63.ec2.internal container=tuned container exited with code 143 (Error): 
Apr 29 06:53:41.208 E ns/openshift-monitoring pod/node-exporter-986cb node/ip-10-0-157-63.ec2.internal container=node-exporter container exited with code 143 (Error): 
Apr 29 06:53:41.678 E ns/openshift-monitoring pod/telemeter-client-66dc7947fd-hdwct node/ip-10-0-172-158.ec2.internal container=telemeter-client container exited with code 2 (Error): 
Apr 29 06:53:41.678 E ns/openshift-monitoring pod/telemeter-client-66dc7947fd-hdwct node/ip-10-0-172-158.ec2.internal container=reload container exited with code 2 (Error): 
Apr 29 06:53:42.677 E ns/openshift-cluster-node-tuning-operator pod/tuned-gbd28 node/ip-10-0-172-158.ec2.internal container=tuned container exited with code 143 (Error): 
Apr 29 06:53:47.362 E ns/openshift-monitoring pod/node-exporter-nlgrk node/ip-10-0-148-116.ec2.internal container=node-exporter container exited with code 143 (Error): 
Apr 29 06:53:49.076 E ns/openshift-monitoring pod/prometheus-adapter-55fdcd6bcd-ncz4s node/ip-10-0-172-158.ec2.internal container=prometheus-adapter container exited with code 2 (Error): 
Apr 29 06:53:54.567 E ns/openshift-monitoring pod/grafana-6d765cbddc-hv4xv node/ip-10-0-157-63.ec2.internal container=grafana-proxy container exited with code 2 (Error): 
Apr 29 06:53:54.846 E ns/openshift-cluster-node-tuning-operator pod/tuned-x5lws node/ip-10-0-131-108.ec2.internal container=tuned container exited with code 143 (Error): 
Apr 29 06:54:02.220 E ns/openshift-ingress pod/router-default-7ff89986f6-frr9t node/ip-10-0-172-158.ec2.internal container=router container exited with code 2 (Error): 06:50:38.400241       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:50:43.383398       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:53:02.169104       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:53:12.380913       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:53:17.369156       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:53:22.368020       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:53:27.363428       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:53:32.391011       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:53:37.369248       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:53:42.369529       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:53:47.409217       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:53:52.368326       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0429 06:53:57.364355       1 router.go:482] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Apr 29 06:54:02.335 E ns/openshift-cluster-node-tuning-operator pod/tuned-wjckt node/ip-10-0-173-108.ec2.internal container=tuned container exited with code 143 (Error): 
Apr 29 06:54:09.250 E ns/openshift-monitoring pod/node-exporter-tjhkc node/ip-10-0-135-108.ec2.internal container=node-exporter container exited with code 143 (Error): 
Apr 29 06:54:15.366 E ns/openshift-image-registry pod/image-registry-85d744ff45-88ft2 node/ip-10-0-172-158.ec2.internal container=registry container exited with code 137 (Error): 
Apr 29 06:54:24.945 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-157-63.ec2.internal container=prometheus container exited with code 1 (Error): 
Apr 29 06:54:41.468 E ns/openshift-image-registry pod/node-ca-nhhd5 node/ip-10-0-173-108.ec2.internal container=node-ca container exited with code 137 (Error): 
Apr 29 06:55:23.663 E ns/openshift-image-registry pod/node-ca-29ptk node/ip-10-0-148-116.ec2.internal container=node-ca container exited with code 137 (Error): 
Apr 29 06:56:05.718 E ns/openshift-image-registry pod/node-ca-nhj28 node/ip-10-0-172-158.ec2.internal container=node-ca container exited with code 137 (Error): 
Apr 29 06:56:21.417 E ns/openshift-console pod/console-6b6d7fd7-8lrw5 node/ip-10-0-135-108.ec2.internal container=console container exited with code 2 (Error): 
Apr 29 06:56:31.845 E ns/openshift-console pod/console-6b6d7fd7-c6kbr node/ip-10-0-173-108.ec2.internal container=console container exited with code 2 (Error): 
Apr 29 06:56:52.355 E ns/openshift-image-registry pod/node-ca-jhcwt node/ip-10-0-131-108.ec2.internal container=node-ca container exited with code 137 (Error): 
Apr 29 06:57:40.160 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-64c9b58b7-22jdm node/ip-10-0-173-108.ec2.internal container=operator container exited with code 2 (Error): 
Apr 29 06:58:39.835 E ns/openshift-controller-manager pod/controller-manager-fbj8n node/ip-10-0-135-108.ec2.internal container=controller-manager container exited with code 137 (Error): 
Apr 29 06:59:23.470 E ns/openshift-controller-manager pod/controller-manager-qt2t5 node/ip-10-0-173-108.ec2.internal container=controller-manager container exited with code 137 (Error): 
Apr 29 07:00:04.525 E ns/openshift-controller-manager pod/controller-manager-s75v8 node/ip-10-0-148-116.ec2.internal container=controller-manager container exited with code 137 (Error):