Result: SUCCESS
Tests: 1 failed / 101 succeeded
Started: 2020-09-19 23:49
Elapsed: 2h3m
Work namespace: ci-op-0pic2j6p
Pod: 4.6.0-0.nightly-2020-09-19-234549-azure-serial
Revision: 1

Test Failures


openshift-tests [sig-arch] Monitor cluster while tests execute 1h2m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-arch\]\sMonitor\scluster\swhile\stests\sexecute$'
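
The --ginkgo.focus value is the failed test's full name with regex metacharacters escaped (\- for hyphens, \s for spaces, \[ for brackets) and anchored with $. A minimal sketch checking the pattern against the test name, in Python purely for illustration (the variable names are assumptions, not part of the harness):

    import re

    # Focus pattern copied from the reproduce command above.
    focus = r"openshift\-tests\s\[sig\-arch\]\sMonitor\scluster\swhile\stests\sexecute$"
    # Full name of the failed test as reported in this run.
    name = "openshift-tests [sig-arch] Monitor cluster while tests execute"

    assert re.search(focus, name) is not None
    print("focus matches:", name)
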
40 error level events were detected during this test run:

Sep 20 00:59:07.624 E clusteroperator/dns changed Degraded to True: DNSDegraded: DNS default is degraded
Sep 20 00:59:16.729 E ns/openshift-sdn pod/ovs-ffp2b node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus3-5m62s container/openvswitch container exited with code 1 (Error): Failed to connect to bus: No data available\nopenvswitch is running in container\n/etc/openvswitch/conf.db does not exist ... (warning).\nCreating empty database /etc/openvswitch/conf.db.\novsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process), aborting\nStarting ovsdb-server ... failed!\n
Sep 20 00:59:24.839 E ns/openshift-sdn pod/ovs-89gxp node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-r8wrr container/openvswitch container exited with code 1 (Error): Failed to connect to bus: No data available\nopenvswitch is running in container\n/etc/openvswitch/conf.db does not exist ... (warning).\nCreating empty database /etc/openvswitch/conf.db.\novsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process), aborting\nStarting ovsdb-server ... failed!\n
Sep 20 00:59:29.796 E ns/openshift-sdn pod/sdn-lhqgv node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus3-5m62s container/sdn container exited with code 255 (Error): fig/kube-proxy-config.yaml\nI0920 00:59:15.800520    2164 feature_gate.go:243] feature gates: &{map[]}\nI0920 00:59:15.800587    2164 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0920 00:59:15.800640    2164 cmd.go:216] Watching config file /config/..2020_09_20_00_59_02.139525631/kube-proxy-config.yaml for changes\nI0920 00:59:15.904276    2164 node.go:152] Initializing SDN node "ci-op-0pic2j6p-a2713-qzptm-worker-centralus3-5m62s" (10.0.32.9) of type "redhat/openshift-ovs-networkpolicy"\nI0920 00:59:15.904689    2164 cmd.go:159] Starting node networking (v0.0.0-alpha.0-212-g60f209a2)\nI0920 00:59:15.904710    2164 node.go:340] Starting openshift-sdn network plugin\nI0920 00:59:16.333854    2164 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nI0920 00:59:16.417096    2164 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:16.926262    2164 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:17.559579    2164 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:18.347873    2164 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:19.329490    2164 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:20.556331    2164 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:22.087561    2164 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:24.001644    2164 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:26.391541    2164 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:29.378519    2164 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0920 00:59:29.378556    2164 cmd.go:111] Failed to start sdn: node SDN setup failed: timed out waiting for the condition\n
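
The ten "br0 is not a bridge or a socket" retries in the excerpt above are spaced roughly 0.5s apart, growing by a factor of about 1.25 per attempt, before the setup aborts with "timed out waiting for the condition" - the shape of an exponential-backoff poll. A minimal sketch of that retry pattern, with initial delay, factor, and step count inferred from the timestamps (assumptions, not the plugin's actual parameters); the same pattern repeats in the sdn-lnxv9 and sdn-h7skt events below:

    import time

    def poll_with_backoff(check, initial=0.5, factor=1.25, steps=10):
        """Retry check() with exponentially growing waits; raise on exhaustion."""
        delay = initial
        for _ in range(steps):
            if check():
                return
            time.sleep(delay)
            delay *= factor
        raise TimeoutError("timed out waiting for the condition")

    # Usage: poll until br0 exists. With a check that never succeeds (as in
    # this run), the call sleeps ~17s total and then raises TimeoutError.
    # poll_with_backoff(lambda: False)
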
Sep 20 00:59:32.830 E ns/openshift-sdn pod/sdn-lhqgv node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus3-5m62s container/sdn container exited with code 255 (Error): I0920 00:59:31.513154    3109 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0920 00:59:31.514627    3109 feature_gate.go:243] feature gates: &{map[]}\nI0920 00:59:31.514688    3109 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0920 00:59:31.514737    3109 cmd.go:216] Watching config file /config/..2020_09_20_00_59_02.139525631/kube-proxy-config.yaml for changes\nI0920 00:59:31.547276    3109 node.go:152] Initializing SDN node "ci-op-0pic2j6p-a2713-qzptm-worker-centralus3-5m62s" (10.0.32.9) of type "redhat/openshift-ovs-networkpolicy"\nI0920 00:59:31.547607    3109 cmd.go:159] Starting node networking (v0.0.0-alpha.0-212-g60f209a2)\nI0920 00:59:31.547630    3109 node.go:340] Starting openshift-sdn network plugin\nI0920 00:59:31.682332    3109 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nF0920 00:59:32.103223    3109 cmd.go:111] Failed to start sdn: node SDN setup failed: Link not found\n
Sep 20 00:59:37.802 E ns/openshift-sdn pod/sdn-lnxv9 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-r8wrr container/sdn container exited with code 255 (Error): fig/kube-proxy-config.yaml\nI0920 00:59:24.232957    2221 feature_gate.go:243] feature gates: &{map[]}\nI0920 00:59:24.233071    2221 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0920 00:59:24.233142    2221 cmd.go:216] Watching config file /config/..2020_09_20_00_59_07.610931410/kube-proxy-config.yaml for changes\nI0920 00:59:24.319031    2221 node.go:152] Initializing SDN node "ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-r8wrr" (10.0.32.7) of type "redhat/openshift-ovs-networkpolicy"\nI0920 00:59:24.319280    2221 cmd.go:159] Starting node networking (v0.0.0-alpha.0-212-g60f209a2)\nI0920 00:59:24.319312    2221 node.go:340] Starting openshift-sdn network plugin\nI0920 00:59:24.584180    2221 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nI0920 00:59:24.642028    2221 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:25.147385    2221 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:25.777751    2221 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:26.564951    2221 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:27.547443    2221 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:28.774257    2221 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:30.306097    2221 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:32.219190    2221 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:34.608342    2221 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:37.594217    2221 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0920 00:59:37.594262    2221 cmd.go:111] Failed to start sdn: node SDN setup failed: timed out waiting for the condition\n
Sep 20 00:59:39.834 E ns/openshift-sdn pod/sdn-lnxv9 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-r8wrr container/sdn container exited with code 255 (Error): I0920 00:59:38.273378    3208 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0920 00:59:38.275610    3208 feature_gate.go:243] feature gates: &{map[]}\nI0920 00:59:38.275677    3208 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0920 00:59:38.275728    3208 cmd.go:216] Watching config file /config/..2020_09_20_00_59_07.610931410/kube-proxy-config.yaml for changes\nI0920 00:59:38.309000    3208 node.go:152] Initializing SDN node "ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-r8wrr" (10.0.32.7) of type "redhat/openshift-ovs-networkpolicy"\nI0920 00:59:38.309277    3208 cmd.go:159] Starting node networking (v0.0.0-alpha.0-212-g60f209a2)\nI0920 00:59:38.309297    3208 node.go:340] Starting openshift-sdn network plugin\nI0920 00:59:38.433249    3208 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nI0920 00:59:39.072036    3208 node.go:389] Starting openshift-sdn pod manager\nI0920 00:59:39.246111    3208 node.go:247] Checking default interface MTU\nF0920 00:59:39.251234    3208 healthcheck.go:99] SDN healthcheck detected unhealthy OVS server, restarting: Link not found\n
Sep 20 00:59:48.124 E clusteroperator/network changed Degraded to True: RolloutHung: DaemonSet "openshift-sdn/sdn" rollout is not making progress - pod sdn-lhqgv is in CrashLoopBackOff State
Sep 20 00:59:49.926 E ns/openshift-sdn pod/ovs-djqrv node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qzn9j container/openvswitch container exited with code 1 (Error): Failed to connect to bus: No data available\nopenvswitch is running in container\n/etc/openvswitch/conf.db does not exist ... (warning).\nCreating empty database /etc/openvswitch/conf.db.\novsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process), aborting\nStarting ovsdb-server ... failed!\n
Sep 20 00:59:54.979 E ns/openshift-sdn pod/sdn-lhqgv node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus3-5m62s container/kube-rbac-proxy container exited with code 1 (Error): -  0:00:25 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:26 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:27 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:28 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:29 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:30 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:31 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:32 --:--:--     0curl: (7) Failed to connect to 172.30.0.1 port 443: No route to host\nTraceback (most recent call last):\n  File "<string>", line 1, in <module>\n  File "/usr/lib64/python3.6/json/__init__.py", line 299, in load\n    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\n  File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads\n    return _default_decoder.decode(s)\n  File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode\n    obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n  File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode\n    raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n
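
The traceback above is Python's json module choking on the empty body left behind when curl cannot reach 172.30.0.1:443: json.load on an empty stream raises exactly "Expecting value: line 1 column 1 (char 0)". A minimal reproduction of that failure mode (the JSON error is the symptom; the root cause is the "No route to host" from the broken SDN); the same pattern recurs in the two kube-rbac-proxy events below:

    import io
    import json

    # When curl exits with "(7) Failed to connect", it writes nothing to
    # stdout, so the downstream json.load sees an empty stream.
    empty_response = io.StringIO("")

    try:
        json.load(empty_response)
    except json.JSONDecodeError as e:
        print(e)  # Expecting value: line 1 column 1 (char 0)
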
Sep 20 01:00:02.890 E ns/openshift-sdn pod/sdn-lnxv9 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-r8wrr container/kube-rbac-proxy container exited with code 1 (Error):    0      0      0 --:--:--  0:00:29 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:30 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:31 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:32 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:33 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:34 --:--:--     0curl: (7) Failed to connect to 172.30.0.1 port 443: No route to host\nTraceback (most recent call last):\n  File "<string>", line 1, in <module>\n  File "/usr/lib64/python3.6/json/__init__.py", line 299, in load\n    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\n  File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads\n    return _default_decoder.decode(s)\n  File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode\n    obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n  File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode\n    raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n
Sep 20 01:00:02.932 E ns/openshift-sdn pod/sdn-h7skt node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qzn9j container/sdn container exited with code 255 (Error): fig/kube-proxy-config.yaml\nI0920 00:59:49.432706    2261 feature_gate.go:243] feature gates: &{map[]}\nI0920 00:59:49.436365    2261 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0920 00:59:49.436433    2261 cmd.go:216] Watching config file /config/..2020_09_20_00_59_32.984157586/kube-proxy-config.yaml for changes\nI0920 00:59:49.526370    2261 node.go:152] Initializing SDN node "ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qzn9j" (10.0.32.8) of type "redhat/openshift-ovs-networkpolicy"\nI0920 00:59:49.526764    2261 cmd.go:159] Starting node networking (v0.0.0-alpha.0-212-g60f209a2)\nI0920 00:59:49.526781    2261 node.go:340] Starting openshift-sdn network plugin\nI0920 00:59:49.793036    2261 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nI0920 00:59:49.860955    2261 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:50.365622    2261 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:50.995591    2261 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:51.783030    2261 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:52.764875    2261 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:53.991661    2261 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:55.523154    2261 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:57.435859    2261 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 00:59:59.826653    2261 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0920 01:00:02.812432    2261 ovs.go:158] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0920 01:00:02.812487    2261 cmd.go:111] Failed to start sdn: node SDN setup failed: timed out waiting for the condition\n
Sep 20 01:00:04.974 E ns/openshift-sdn pod/sdn-h7skt node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qzn9j container/sdn container exited with code 255 (Error): I0920 01:00:03.161232    3219 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0920 01:00:03.163664    3219 feature_gate.go:243] feature gates: &{map[]}\nI0920 01:00:03.163730    3219 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0920 01:00:03.163772    3219 cmd.go:216] Watching config file /config/..2020_09_20_00_59_32.984157586/kube-proxy-config.yaml for changes\nI0920 01:00:03.224009    3219 node.go:152] Initializing SDN node "ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qzn9j" (10.0.32.8) of type "redhat/openshift-ovs-networkpolicy"\nI0920 01:00:03.224304    3219 cmd.go:159] Starting node networking (v0.0.0-alpha.0-212-g60f209a2)\nI0920 01:00:03.224320    3219 node.go:340] Starting openshift-sdn network plugin\nI0920 01:00:03.353332    3219 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nI0920 01:00:03.976316    3219 node.go:389] Starting openshift-sdn pod manager\nI0920 01:00:04.143485    3219 node.go:247] Checking default interface MTU\nF0920 01:00:04.147520    3219 healthcheck.go:99] SDN healthcheck detected unhealthy OVS server, restarting: Link not found\n
Sep 20 01:00:20.205 E clusteroperator/network changed Degraded to True: RolloutHung: DaemonSet "openshift-sdn/sdn" rollout is not making progress - pod sdn-h7skt is in CrashLoopBackOff State
Sep 20 01:00:28.064 E ns/openshift-sdn pod/sdn-h7skt node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qzn9j container/kube-rbac-proxy container exited with code 1 (Error): 0      0      0 --:--:--  0:00:29 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:30 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:31 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:32 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:33 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:34 --:--:--     0curl: (7) Failed to connect to 172.30.0.1 port 443: No route to host\nTraceback (most recent call last):\n  File "<string>", line 1, in <module>\n  File "/usr/lib64/python3.6/json/__init__.py", line 299, in load\n    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\n  File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads\n    return _default_decoder.decode(s)\n  File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode\n    obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n  File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode\n    raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n
Sep 20 01:00:41.419 E ns/openshift-kube-storage-version-migrator pod/migrator-786d87bc47-6bjz6 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-hz7w6 container/migrator container exited with code 2 (Error): I0920 00:26:17.539622       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0920 00:26:17.539734       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0920 00:26:17.539741       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0920 00:26:17.539749       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0920 00:26:17.539754       1 migrator.go:18] FLAG: --kubeconfig=""\nI0920 00:26:17.539758       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0920 00:26:17.539764       1 migrator.go:18] FLAG: --log_dir=""\nI0920 00:26:17.539767       1 migrator.go:18] FLAG: --log_file=""\nI0920 00:26:17.539770       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0920 00:26:17.539774       1 migrator.go:18] FLAG: --logtostderr="true"\nI0920 00:26:17.539777       1 migrator.go:18] FLAG: --skip_headers="false"\nI0920 00:26:17.539780       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0920 00:26:17.539783       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0920 00:26:17.539786       1 migrator.go:18] FLAG: --v="2"\nI0920 00:26:17.539790       1 migrator.go:18] FLAG: --vmodule=""\nI0920 00:26:17.541854       1 reflector.go:175] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125\nI0920 00:27:50.718296       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0920 00:31:31.090394       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Sep 20 01:00:41.498 E ns/openshift-marketplace pod/redhat-marketplace-4rh2s node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-hz7w6 container/registry-server container exited with code 2 (Error): 
Sep 20 01:00:41.537 E ns/openshift-monitoring pod/thanos-querier-6fff48694f-6dshj node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-hz7w6 container/oauth-proxy container exited with code 2 (Error):  00:26:56 oauthproxy.go:785: basicauth: 10.128.0.4:40574 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:28:56 oauthproxy.go:785: basicauth: 10.128.0.4:47472 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:29:56 oauthproxy.go:785: basicauth: 10.128.0.4:50584 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:34:56 oauthproxy.go:785: basicauth: 10.128.0.4:38582 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:36:56 oauthproxy.go:785: basicauth: 10.128.0.4:44802 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:38:56 oauthproxy.go:785: basicauth: 10.128.0.4:50986 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:40:56 oauthproxy.go:785: basicauth: 10.128.0.4:57530 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:41:56 oauthproxy.go:785: basicauth: 10.128.0.4:60904 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:43:56 oauthproxy.go:785: basicauth: 10.128.0.4:38810 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:46:56 oauthproxy.go:785: basicauth: 10.128.0.4:48140 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:52:56 oauthproxy.go:785: basicauth: 10.128.0.4:38640 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:54:56 oauthproxy.go:785: basicauth: 10.128.0.4:44794 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:55:56 oauthproxy.go:785: basicauth: 10.128.0.4:47888 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:57:56 oauthproxy.go:785: basicauth: 10.128.0.4:54082 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 20 01:00:42.840 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-hz7w6 container/rules-configmap-reloader container exited with code 2 (Error): 2020/09/20 00:26:31 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/09/20 00:26:33 config map updated\n2020/09/20 00:26:33 error: Post "http://localhost:9090/-/reload": dial tcp [::1]:9090: connect: connection refused\n
Sep 20 01:00:42.840 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-hz7w6 container/prometheus-proxy container exited with code 2 (Error): 2020/09/20 00:26:32 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/20 00:26:32 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 00:26:32 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 00:26:32 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/20 00:26:32 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/09/20 00:26:32 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/20 00:26:32 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2020/09/20 00:26:32 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0920 00:26:32.296953       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/20 00:26:32 http.go:107: HTTPS: listening on [::]:9091\n
Sep 20 01:00:42.840 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-hz7w6 container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-20T00:26:31.232001203Z caller=main.go:87 msg="Starting prometheus-config-reloader version '1.14'."\nlevel=error ts=2020-09-20T00:26:31.237142277Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post \"http://localhost:9090/-/reload\": dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-20T00:26:36.502463895Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-20T00:26:36.502570199Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-09-20T00:26:36.741718312Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-20T00:29:36.720281312Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Sep 20 01:00:54.800 E ns/openshift-operator-lifecycle-manager pod/catalog-operator-84f96b84fb-bdvrz node/ci-op-0pic2j6p-a2713-qzptm-master-1 container/catalog-operator container exited with code 2 (Error): onciler/grpc.go:158 +0xc0\ngithub.com/operator-framework/operator-lifecycle-manager/pkg/controller/operators/catalog.(*Operator).syncRegistryServer(0xc000470000, 0xc00020c690, 0xc00051c000, 0xc00051c000, 0xc000050a01, 0x0, 0x0)\n	/build/pkg/controller/operators/catalog/operator.go:625 +0x326\ngithub.com/operator-framework/operator-lifecycle-manager/pkg/controller/operators/catalog.(*Operator).syncCatalogSources.func1(0xc00051c000, 0xc001300a00, 0x4, 0x4, 0x1, 0x478b42, 0x3)\n	/build/pkg/controller/operators/catalog/operator.go:741 +0x76\ngithub.com/operator-framework/operator-lifecycle-manager/pkg/controller/operators/catalog.(*Operator).syncCatalogSources(0xc000470000, 0x1bed960, 0xc0001b6000, 0xc00020c310, 0x1cf5f30)\n	/build/pkg/controller/operators/catalog/operator.go:788 +0x406\ngithub.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer.LegacySyncHandler.ToSyncerWithDelete.func1(0x1f48580, 0xc00050e100, 0x1f155c0, 0xc0013009e0, 0xc0013009e0, 0x1aa0e40)\n	/build/pkg/lib/queueinformer/queueinformer.go:183 +0x25e\ngithub.com/operator-framework/operator-lifecycle-manager/pkg/lib/kubestate.SyncFunc.Sync(0xc0006fe880, 0x1f48580, 0xc00050e100, 0x1f155c0, 0xc0013009e0, 0xc00020c001, 0x0)\n	/build/pkg/lib/kubestate/kubestate.go:184 +0x4e\ngithub.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer.(*QueueInformer).Sync(...)\n	/build/pkg/lib/queueinformer/queueinformer.go:36\ngithub.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer.(*operator).processNextWorkItem(0xc0006a6d10, 0x1f48580, 0xc00050e100, 0xc000738b40, 0x0)\n	/build/pkg/lib/queueinformer/queueinformer_operator.go:287 +0x330\ngithub.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer.(*operator).worker(0xc0006a6d10, 0x1f48580, 0xc00050e100, 0xc000738b40)\n	/build/pkg/lib/queueinformer/queueinformer_operator.go:231 +0x49\ncreated by github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer.(*operator).start\n	/build/pkg/lib/queueinformer/queueinformer_operator.go:221 +0x446\n
Sep 20 01:01:00.871 E ns/openshift-monitoring pod/thanos-querier-6fff48694f-fgv2f node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qb9ct container/oauth-proxy container exited with code 2 (Error):  00:39:59 oauthproxy.go:785: basicauth: 10.128.0.4:54348 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:42:56 oauthproxy.go:785: basicauth: 10.128.0.4:35780 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:44:56 oauthproxy.go:785: basicauth: 10.128.0.4:41878 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:45:56 oauthproxy.go:785: basicauth: 10.128.0.4:45000 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:47:56 oauthproxy.go:785: basicauth: 10.128.0.4:51240 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:48:56 oauthproxy.go:785: basicauth: 10.128.0.4:54380 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:49:56 oauthproxy.go:785: basicauth: 10.128.0.4:57580 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:50:56 oauthproxy.go:785: basicauth: 10.128.0.4:60672 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:51:56 oauthproxy.go:785: basicauth: 10.128.0.4:35594 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:53:56 oauthproxy.go:785: basicauth: 10.128.0.4:41724 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:56:56 oauthproxy.go:785: basicauth: 10.128.0.4:50994 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:58:56 oauthproxy.go:785: basicauth: 10.128.0.4:57210 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:59:56 oauthproxy.go:785: basicauth: 10.128.0.4:60514 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:00:56 oauthproxy.go:785: basicauth: 10.128.0.4:35938 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 20 01:01:00.908 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qb9ct container/rules-configmap-reloader container exited with code 2 (Error): 2020/09/20 00:26:06 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/09/20 00:27:24 config map updated\n2020/09/20 00:27:25 successfully triggered reload\n
Sep 20 01:01:00.908 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qb9ct container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-20T00:26:06.317745937Z caller=main.go:87 msg="Starting prometheus-config-reloader version '1.14'."\nlevel=error ts=2020-09-20T00:26:06.319526047Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post \"http://localhost:9090/-/reload\": dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-20T00:26:11.52422215Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-20T00:26:11.52429045Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-09-20T00:26:11.71573241Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-20T00:29:11.705198997Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Sep 20 01:01:00.908 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qb9ct container/prometheus-proxy container exited with code 2 (Error): y.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/20 00:26:07 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/09/20 00:26:07 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/20 00:26:07 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2020/09/20 00:26:07 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/20 00:26:07 http.go:107: HTTPS: listening on [::]:9091\nI0920 00:26:07.046669       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/20 00:26:18 oauthproxy.go:785: basicauth: 10.131.0.17:60782 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:30:48 oauthproxy.go:785: basicauth: 10.131.0.17:36942 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:35:19 oauthproxy.go:785: basicauth: 10.131.0.17:41256 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:39:50 oauthproxy.go:785: basicauth: 10.131.0.17:45046 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:44:20 oauthproxy.go:785: basicauth: 10.131.0.17:48782 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:48:51 oauthproxy.go:785: basicauth: 10.131.0.17:52626 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:53:21 oauthproxy.go:785: basicauth: 10.131.0.17:56320 Authorization header does not start with 'Basic', skipping basic authentication\n20
Sep 20 01:01:00.948 E ns/openshift-monitoring pod/grafana-58646d7db4-kgcs9 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qb9ct container/grafana-proxy container exited with code 2 (Error): 
Sep 20 01:01:01.994 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qb9ct container/config-reloader container exited with code 2 (Error): 2020/09/20 00:25:58 Watching directory: "/etc/alertmanager/config"\n
Sep 20 01:01:01.994 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qb9ct container/alertmanager-proxy container exited with code 2 (Error): 2020/09/20 00:25:59 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 00:25:59 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 00:25:59 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 00:25:59 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/20 00:25:59 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/09/20 00:25:59 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 00:25:59 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\nI0920 00:25:59.431741       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/20 00:25:59 http.go:107: HTTPS: listening on [::]:9095\n
Sep 20 01:01:09.950 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qb9ct container/config-reloader container exited with code 2 (Error): 2020/09/20 00:25:58 Watching directory: "/etc/alertmanager/config"\n
Sep 20 01:01:09.950 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qb9ct container/alertmanager-proxy container exited with code 2 (Error): 2020/09/20 00:25:59 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 00:25:59 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 00:25:59 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 00:25:59 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/20 00:25:59 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/09/20 00:25:59 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 00:25:59 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\nI0920 00:25:59.486420       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/20 00:25:59 http.go:107: HTTPS: listening on [::]:9095\n
Sep 20 01:01:17.236 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus3-5m62s container/prometheus container exited with code 2 (Error): level=error ts=2020-09-20T01:01:02.884Z caller=main.go:285 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 20 01:01:26.242 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus2-qzn9j container/prometheus container exited with code 2 (Error): level=error ts=2020-09-20T01:01:10.681Z caller=main.go:285 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 20 01:01:44.903 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus3-drgc5 container/prometheus container exited with code 2 (Error): level=error ts=2020-09-20T01:01:40.536Z caller=main.go:285 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 20 01:15:22.016 E ns/e2e-daemonsets-2247 pod/daemon-set-djfvx node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus3-drgc5 container/app container exited with code 2 (Error): 
Sep 20 01:21:33.676 E ns/openshift-monitoring pod/kube-state-metrics-5958785f56-fd5mw node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-r8wrr container/kube-state-metrics container exited with code 2 (Error): 
Sep 20 01:31:39.078 E ns/openshift-authentication pod/oauth-openshift-776cd6779d-9vls4 node/ci-op-0pic2j6p-a2713-qzptm-master-2 container/oauth-openshift container exited with code 2 (Error): ver/pkg/oauthserver.(*OAuthServerConfig).buildHandlerChainForOAuth(0xc000503680, 0x2047400, 0xc000213260, 0xc00041efc0, 0x1a1c120, 0xc000bab590)\n	github.com/openshift/oauth-server/pkg/oauthserver/oauth_apiserver.go:307 +0xee\nk8s.io/apiserver/pkg/server.completedConfig.New.func1(0x2047400, 0xc000213260, 0x2047400, 0xc000213260)\n	k8s.io/apiserver@v0.19.0/pkg/server/config.go:534 +0x45\nk8s.io/apiserver/pkg/server.NewAPIServerHandler(0x1cd0522, 0xf, 0x208b520, 0xc0004158c0, 0xc000bab7b8, 0x0, 0x0, 0xc000bab6b0)\n	k8s.io/apiserver@v0.19.0/pkg/server/handler.go:96 +0x284\nk8s.io/apiserver/pkg/server.completedConfig.New(0xc00041efc0, 0x0, 0x0, 0x1cd0522, 0xf, 0x20a3fc0, 0x2f75858, 0xc00041efc0, 0x0, 0x0)\n	k8s.io/apiserver@v0.19.0/pkg/server/config.go:536 +0x124\ngithub.com/openshift/oauth-server/pkg/oauthserver.completedOAuthConfig.New(0xc0002131e0, 0xc000503688, 0x20a3fc0, 0x2f75858, 0x4, 0x2089220, 0xc000112320)\n	github.com/openshift/oauth-server/pkg/oauthserver/oauth_apiserver.go:290 +0x70\ngithub.com/openshift/oauth-server/pkg/cmd/oauth-server.RunOsinServer(0xc0008d2600, 0xc000526de0, 0xcaa, 0xeaa)\n	github.com/openshift/oauth-server/pkg/cmd/oauth-server/server.go:41 +0x89\ngithub.com/openshift/oauth-server/pkg/cmd/oauth-server.(*OsinServer).RunOsinServer(0xc0004067f0, 0xc000526de0, 0xc00090fb48, 0x5ecce0)\n	github.com/openshift/oauth-server/pkg/cmd/oauth-server/cmd.go:91 +0x286\ngithub.com/openshift/oauth-server/pkg/cmd/oauth-server.NewOsinServer.func1(0xc0000d7080, 0xc0002132c0, 0x0, 0x2)\n	github.com/openshift/oauth-server/pkg/cmd/oauth-server/cmd.go:39 +0x109\ngithub.com/spf13/cobra.(*Command).execute(0xc0000d7080, 0xc0002132a0, 0x2, 0x2, 0xc0000d7080, 0xc0002132a0)\n	github.com/spf13/cobra@v1.0.0/command.go:846 +0x29d\ngithub.com/spf13/cobra.(*Command).ExecuteC(0xc0000d6840, 0xc0000d6840, 0x0, 0x0)\n	github.com/spf13/cobra@v1.0.0/command.go:950 +0x349\ngithub.com/spf13/cobra.(*Command).Execute(...)\n	github.com/spf13/cobra@v1.0.0/command.go:887\nmain.main()\n	github.com/openshift/oauth-server/cmd/oauth-server/main.go:41 +0x2cc\n
Sep 20 01:32:43.307 E ns/openshift-authentication pod/oauth-openshift-84c7b9ff77-hqnwl node/ci-op-0pic2j6p-a2713-qzptm-master-2 container/oauth-openshift container exited with code 255 (Error): (...)\n	k8s.io/klog/v2@v2.3.0/klog.go:1443\ngithub.com/openshift/oauth-server/pkg/cmd/oauth-server.NewOsinServer.func1(0xc000629340, 0xc000375e40, 0x0, 0x2)\n	github.com/openshift/oauth-server/pkg/cmd/oauth-server/cmd.go:49 +0x48b\ngithub.com/spf13/cobra.(*Command).execute(0xc000629340, 0xc000375e00, 0x2, 0x2, 0xc000629340, 0xc000375e00)\n	github.com/spf13/cobra@v1.0.0/command.go:846 +0x29d\ngithub.com/spf13/cobra.(*Command).ExecuteC(0xc000629080, 0xc000629080, 0x0, 0x0)\n	github.com/spf13/cobra@v1.0.0/command.go:950 +0x349\ngithub.com/spf13/cobra.(*Command).Execute(...)\n	github.com/spf13/cobra@v1.0.0/command.go:887\nmain.main()\n	github.com/openshift/oauth-server/cmd/oauth-server/main.go:41 +0x2cc\n\ngoroutine 5 [chan receive]:\nk8s.io/klog/v2.(*loggingT).flushDaemon(0x2f4a080)\n	k8s.io/klog/v2@v2.3.0/klog.go:1131 +0x8b\ncreated by k8s.io/klog/v2.init.0\n	k8s.io/klog/v2@v2.3.0/klog.go:416 +0xd6\n\ngoroutine 58 [chan receive]:\nk8s.io/apiserver/pkg/server.SetupSignalContext.func1(0xc00047d8f0)\n	k8s.io/apiserver@v0.19.0/pkg/server/signal.go:48 +0x36\ncreated by k8s.io/apiserver/pkg/server.SetupSignalContext\n	k8s.io/apiserver@v0.19.0/pkg/server/signal.go:47 +0xf3\n\ngoroutine 59 [select]:\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x1e64690, 0x2044de0, 0xc0000ef680, 0xc000182201, 0xc0001821e0)\n	k8s.io/apimachinery@v0.19.0/pkg/util/wait/wait.go:167 +0x13f\nk8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1e64690, 0x12a05f200, 0x0, 0x58c101, 0xc0001821e0)\n	k8s.io/apimachinery@v0.19.0/pkg/util/wait/wait.go:133 +0x98\nk8s.io/apimachinery/pkg/util/wait.Until(...)\n	k8s.io/apimachinery@v0.19.0/pkg/util/wait/wait.go:90\nk8s.io/apimachinery/pkg/util/wait.Forever(0x1e64690, 0x12a05f200)\n	k8s.io/apimachinery@v0.19.0/pkg/util/wait/wait.go:81 +0x4f\ncreated by k8s.io/component-base/logs.InitLogs\n	k8s.io/component-base@v0.19.0/logs/logs.go:58 +0x8a\n\ngoroutine 133 [syscall]:\nos/signal.signal_recv(0x46da96)\n	runtime/sigqueue.go:147 +0x9c\nos/signal.loop()\n	os/signal/signal_unix.go:23 +0x22\ncreated by os/signal.Notify.func1\n	os/signal/signal.go:127 +0x44\n
Sep 20 01:33:08.348 E ns/e2e-daemonsets-3795 pod/daemon-set-hrpmv node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-r8wrr reason/Failed (): 
Sep 20 01:35:55.416 E ns/e2e-volumelimits-7155-3119 pod/csi-hostpath-snapshotter-0 node/ci-op-0pic2j6p-a2713-qzptm-worker-centralus1-r8wrr container/csi-snapshotter container exited with code 255 (Error): Lost connection to CSI driver, exiting