Result: SUCCESS
Tests: 1 failed / 104 succeeded
Started: 2020-09-18 01:28
Elapsed: 1h47m
Work namespace: ci-op-m1mlgj10
Pod: 2ad8e222-f94e-11ea-ad59-0a580a810c37
Revision: 1

Test Failures


openshift-tests [sig-arch] Monitor cluster while tests execute 52m32s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-arch\]\sMonitor\scluster\swhile\stests\sexecute$'
36 error level events were detected during this test run:

Sep 18 02:15:44.141 E ns/openshift-monitoring pod/thanos-querier-6f45f9dbf7-9gzdh node/ci-op-m1mlgj10-86b9b-tshnf-worker-b-r599d container/oauth-proxy container exited with code 2 (Error): authproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/18 02:05:49 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/09/18 02:05:49 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/18 02:05:49 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2020/09/18 02:05:49 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/18 02:05:49 http.go:107: HTTPS: listening on [::]:9091\nI0918 02:05:49.049348       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 02:06:00 oauthproxy.go:785: basicauth: 10.128.0.2:59304 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 02:07:01 oauthproxy.go:785: basicauth: 10.128.0.2:35310 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 02:08:00 oauthproxy.go:785: basicauth: 10.128.0.2:38544 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 02:09:00 oauthproxy.go:785: basicauth: 10.128.0.2:42018 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 02:10:00 oauthproxy.go:785: basicauth: 10.128.0.2:45328 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 02:11:00 oauthproxy.go:785: basicauth: 10.128.0.2:49566 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 02:12:00 oauthproxy.go:785: basicauth: 10.128.0.2:52298 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 02:14:00 oauthproxy.go:785: 
basicauth: 10.128.0.2:60458 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 02:15:00 oauthproxy.go:785: basicauth: 10.128.0.2:35598 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 18 02:15:45.682 E ns/openshift-marketplace pod/redhat-marketplace-vnkx5 node/ci-op-m1mlgj10-86b9b-tshnf-worker-b-r599d container/registry-server container exited with code 2 (Error): 
Sep 18 02:15:45.884 E ns/openshift-monitoring pod/prometheus-adapter-587cc4b84c-fx4jz node/ci-op-m1mlgj10-86b9b-tshnf-worker-b-r599d container/prometheus-adapter container exited with code 2 (Error): I0918 02:05:17.538895       1 adapter.go:94] successfully using in-cluster auth\nI0918 02:05:18.012245       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0918 02:05:18.012258       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0918 02:05:18.012456       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0918 02:05:18.013763       1 secure_serving.go:178] Serving securely on [::]:6443\nI0918 02:05:18.013937       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
Sep 18 02:17:51.593 E ns/e2e-volumelimits-6530-4364 pod/csi-hostpath-snapshotter-0 node/ci-op-m1mlgj10-86b9b-tshnf-worker-d-jtvnq container/csi-snapshotter container exited with code 255 (Error): Lost connection to CSI driver, exiting
Sep 18 02:17:51.612 E ns/e2e-volumelimits-6530-4364 pod/csi-hostpath-attacher-0 node/ci-op-m1mlgj10-86b9b-tshnf-worker-d-jtvnq container/csi-attacher container exited with code 2 (Error): 
Sep 18 02:17:51.635 E ns/e2e-volumelimits-6530-4364 pod/csi-hostpath-resizer-0 node/ci-op-m1mlgj10-86b9b-tshnf-worker-d-jtvnq container/csi-resizer container exited with code 255 (Error): Lost connection to CSI driver, exiting
Sep 18 02:17:51.673 E ns/e2e-volumelimits-6530-4364 pod/csi-hostpathplugin-0 node/ci-op-m1mlgj10-86b9b-tshnf-worker-d-jtvnq container/hostpath container exited with code 2 (Error): 
Sep 18 02:17:51.673 E ns/e2e-volumelimits-6530-4364 pod/csi-hostpathplugin-0 node/ci-op-m1mlgj10-86b9b-tshnf-worker-d-jtvnq container/node-driver-registrar container exited with code 2 (Error): 
Sep 18 02:17:51.673 E ns/e2e-volumelimits-6530-4364 pod/csi-hostpathplugin-0 node/ci-op-m1mlgj10-86b9b-tshnf-worker-d-jtvnq container/liveness-probe container exited with code 2 (Error): 
Sep 18 02:21:23.248 E ns/e2e-daemonsets-1492 pod/daemon-set-8l5lc node/ci-op-m1mlgj10-86b9b-tshnf-worker-b-r599d container/app container exited with code 2 (Error): 
Sep 18 02:29:35.426 E ns/e2e-daemonsets-9891 pod/daemon-set-28smh node/ci-op-m1mlgj10-86b9b-tshnf-worker-d-jtvnq reason/Failed (): 
Sep 18 02:48:45.322 E clusteroperator/dns changed Degraded to True: DNSDegraded: DNS default is degraded
Sep 18 02:49:16.226 E ns/openshift-sdn pod/ovs-dr2r2 node/ci-op-m1mlgj10-86b9b-tshnf-worker-b-k7hwd container/openvswitch container exited with code 1 (Error): Failed to connect to bus: No data available\nopenvswitch is running in container\n/etc/openvswitch/conf.db does not exist ... (warning).\nCreating empty database /etc/openvswitch/conf.db.\novsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process), aborting\nStarting ovsdb-server ... failed!\n
Sep 18 02:49:16.501 E ns/openshift-sdn pod/ovs-z8848 node/ci-op-m1mlgj10-86b9b-tshnf-worker-d-2lvdl container/openvswitch container exited with code 1 (Error): Failed to connect to bus: No data available\nopenvswitch is running in container\n/etc/openvswitch/conf.db does not exist ... (warning).\nCreating empty database /etc/openvswitch/conf.db.\novsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process), aborting\nStarting ovsdb-server ... failed!\n
Sep 18 02:49:22.870 E ns/openshift-sdn pod/ovs-qqwl2 node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-zlnnp container/openvswitch container exited with code 1 (Error): Failed to connect to bus: No data available\nopenvswitch is running in container\n/etc/openvswitch/conf.db does not exist ... (warning).\nCreating empty database /etc/openvswitch/conf.db.\novsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process), aborting\nStarting ovsdb-server ... failed!\n
Sep 18 02:49:29.191 E ns/openshift-sdn pod/sdn-m7php node/ci-op-m1mlgj10-86b9b-tshnf-worker-b-k7hwd container/sdn container exited with code 255 (Error):  from /config/kube-proxy-config.yaml\nI0918 02:49:15.257081    2315 feature_gate.go:243] feature gates: &{map[]}\nI0918 02:49:15.257199    2315 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0918 02:49:15.257292    2315 cmd.go:216] Watching config file /config/..2020_09_18_02_48_44.044427809/kube-proxy-config.yaml for changes\nI0918 02:49:15.288377    2315 node.go:150] Initializing SDN node "ci-op-m1mlgj10-86b9b-tshnf-worker-b-k7hwd" (10.0.32.5) of type "redhat/openshift-ovs-networkpolicy"\nI0918 02:49:15.312860    2315 cmd.go:159] Starting node networking (v0.0.0-alpha.0-205-ge30b293)\nI0918 02:49:15.312883    2315 node.go:338] Starting openshift-sdn network plugin\nI0918 02:49:15.590709    2315 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nI0918 02:49:15.648937    2315 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:16.155930    2315 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:16.789306    2315 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:17.576932    2315 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:18.560062    2315 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:19.788314    2315 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:21.321180    2315 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:23.235437    2315 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:25.626035    2315 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 
02:49:28.611605    2315 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0918 02:49:28.611665    2315 cmd.go:111] Failed to start sdn: node SDN setup failed: timed out waiting for the condition\n
Sep 18 02:49:30.534 E ns/openshift-sdn pod/sdn-zrn6v node/ci-op-m1mlgj10-86b9b-tshnf-worker-d-2lvdl container/sdn container exited with code 255 (Error):  from /config/kube-proxy-config.yaml\nI0918 02:49:16.267736    2456 feature_gate.go:243] feature gates: &{map[]}\nI0918 02:49:16.267842    2456 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0918 02:49:16.267974    2456 cmd.go:216] Watching config file /config/..2020_09_18_02_48_45.482788324/kube-proxy-config.yaml for changes\nI0918 02:49:16.342327    2456 node.go:150] Initializing SDN node "ci-op-m1mlgj10-86b9b-tshnf-worker-d-2lvdl" (10.0.32.7) of type "redhat/openshift-ovs-networkpolicy"\nI0918 02:49:16.364276    2456 cmd.go:159] Starting node networking (v0.0.0-alpha.0-205-ge30b293)\nI0918 02:49:16.364300    2456 node.go:338] Starting openshift-sdn network plugin\nI0918 02:49:16.759569    2456 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nI0918 02:49:16.845628    2456 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:17.353395    2456 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:17.985509    2456 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:18.774368    2456 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:19.757853    2456 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:20.985996    2456 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:22.518521    2456 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:24.432866    2456 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:26.822424    2456 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 
02:49:29.808111    2456 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0918 02:49:29.808150    2456 cmd.go:111] Failed to start sdn: node SDN setup failed: timed out waiting for the condition\n
Sep 18 02:49:31.206 E ns/openshift-sdn pod/sdn-m7php node/ci-op-m1mlgj10-86b9b-tshnf-worker-b-k7hwd container/sdn container exited with code 255 (Error): I0918 02:49:29.435647    3304 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0918 02:49:29.437634    3304 feature_gate.go:243] feature gates: &{map[]}\nI0918 02:49:29.437677    3304 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0918 02:49:29.437709    3304 cmd.go:216] Watching config file /config/..2020_09_18_02_48_44.044427809/kube-proxy-config.yaml for changes\nI0918 02:49:29.472431    3304 node.go:150] Initializing SDN node "ci-op-m1mlgj10-86b9b-tshnf-worker-b-k7hwd" (10.0.32.5) of type "redhat/openshift-ovs-networkpolicy"\nI0918 02:49:29.477681    3304 cmd.go:159] Starting node networking (v0.0.0-alpha.0-205-ge30b293)\nI0918 02:49:29.477701    3304 node.go:338] Starting openshift-sdn network plugin\nI0918 02:49:29.602300    3304 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nI0918 02:49:30.266691    3304 node.go:387] Starting openshift-sdn pod manager\nI0918 02:49:30.442795    3304 node.go:245] Checking default interface MTU\nF0918 02:49:30.443541    3304 healthcheck.go:99] SDN healthcheck detected unhealthy OVS server, restarting: Link not found\n
Sep 18 02:49:35.950 E ns/openshift-sdn pod/sdn-9w7kj node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-zlnnp container/sdn container exited with code 255 (Error):  from /config/kube-proxy-config.yaml\nI0918 02:49:21.840320    2347 feature_gate.go:243] feature gates: &{map[]}\nI0918 02:49:21.840390    2347 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0918 02:49:21.840442    2347 cmd.go:216] Watching config file /config/..2020_09_18_02_48_49.057957072/kube-proxy-config.yaml for changes\nI0918 02:49:21.883239    2347 node.go:150] Initializing SDN node "ci-op-m1mlgj10-86b9b-tshnf-worker-c-zlnnp" (10.0.32.6) of type "redhat/openshift-ovs-networkpolicy"\nI0918 02:49:21.910952    2347 cmd.go:159] Starting node networking (v0.0.0-alpha.0-205-ge30b293)\nI0918 02:49:21.910986    2347 node.go:338] Starting openshift-sdn network plugin\nI0918 02:49:22.188870    2347 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nI0918 02:49:22.248956    2347 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:22.756862    2347 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:23.389357    2347 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:24.178231    2347 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:25.162755    2347 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:26.392944    2347 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:27.926928    2347 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:29.841752    2347 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 02:49:32.232311    2347 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0918 
02:49:35.220383    2347 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0918 02:49:35.220432    2347 cmd.go:111] Failed to start sdn: node SDN setup failed: timed out waiting for the condition\n
Sep 18 02:49:37.974 E ns/openshift-sdn pod/sdn-9w7kj node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-zlnnp container/sdn container exited with code 255 (Error): I0918 02:49:36.164739    3338 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0918 02:49:36.166394    3338 feature_gate.go:243] feature gates: &{map[]}\nI0918 02:49:36.166440    3338 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0918 02:49:36.166468    3338 cmd.go:216] Watching config file /config/..2020_09_18_02_48_49.057957072/kube-proxy-config.yaml for changes\nI0918 02:49:36.196365    3338 node.go:150] Initializing SDN node "ci-op-m1mlgj10-86b9b-tshnf-worker-c-zlnnp" (10.0.32.6) of type "redhat/openshift-ovs-networkpolicy"\nI0918 02:49:36.203254    3338 cmd.go:159] Starting node networking (v0.0.0-alpha.0-205-ge30b293)\nI0918 02:49:36.203279    3338 node.go:338] Starting openshift-sdn network plugin\nI0918 02:49:36.338139    3338 sdn_controller.go:139] [SDN setup] full SDN setup required (Link not found)\nI0918 02:49:37.013646    3338 node.go:387] Starting openshift-sdn pod manager\nI0918 02:49:37.211258    3338 node.go:245] Checking default interface MTU\nF0918 02:49:37.212053    3338 healthcheck.go:99] SDN healthcheck detected unhealthy OVS server, restarting: Link not found\n
Sep 18 02:49:56.312 E ns/openshift-sdn pod/sdn-m7php node/ci-op-m1mlgj10-86b9b-tshnf-worker-b-k7hwd container/kube-rbac-proxy container exited with code 1 (Error): [curl progress meter elided: 0 bytes transferred through 0:00:34] curl: (7) Failed to connect to 172.30.0.1 port 443: No route to host\nTraceback (most recent call last):\n  File "<string>", line 1, in <module>\n  File "/usr/lib64/python3.6/json/__init__.py", line 299, in load\n    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\n  File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads\n    return _default_decoder.decode(s)\n  File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode\n    obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n  File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode\n    raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n
Sep 18 02:49:57.661 E ns/openshift-sdn pod/sdn-zrn6v node/ci-op-m1mlgj10-86b9b-tshnf-worker-d-2lvdl container/kube-rbac-proxy container exited with code 1 (Error): [curl progress meter elided: 0 bytes transferred through 0:00:34] curl: (7) Failed to connect to 172.30.0.1 port 443: No route to host\nTraceback (most recent call last):\n  File "<string>", line 1, in <module>\n  File "/usr/lib64/python3.6/json/__init__.py", line 299, in load\n    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\n  File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads\n    return _default_decoder.decode(s)\n  File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode\n    obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n  File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode\n    raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n
Sep 18 02:50:03.132 E ns/openshift-sdn pod/sdn-9w7kj node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-zlnnp container/kube-rbac-proxy container exited with code 1 (Error): [curl progress meter elided: 0 bytes transferred through 0:00:34] curl: (7) Failed to connect to 172.30.0.1 port 443: No route to host\nTraceback (most recent call last):\n  File "<string>", line 1, in <module>\n  File "/usr/lib64/python3.6/json/__init__.py", line 299, in load\n    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\n  File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads\n    return _default_decoder.decode(s)\n  File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode\n    obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n  File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode\n    raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n
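The JSONDecodeError in the kube-rbac-proxy events above is a secondary symptom: the container's check pipes a curl response into Python's json parser, and when curl fails with "No route to host" it emits nothing, so the parser sees empty input. A minimal sketch reproducing just that failure mode (the exact check script is an assumption; only the empty-input behavior is shown):

```python
import json

# curl exited with (7) and wrote nothing to stdout, so the JSON
# parser receives an empty string instead of a response body.
empty_curl_output = ""

try:
    json.loads(empty_curl_output)
except json.JSONDecodeError as err:
    # Matches the log: "Expecting value: line 1 column 1 (char 0)"
    print(err)
```

So the real fault is the unreachable service IP (172.30.0.1:443), consistent with the SDN/OVS failures on the same nodes; the traceback is just the parser choking on the empty result.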
Sep 18 02:50:04.947 E ns/openshift-monitoring pod/kube-state-metrics-6f94d68747-vtlfk node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-drvsd container/kube-state-metrics container exited with code 2 (Error): 
Sep 18 02:50:04.968 E ns/openshift-kube-storage-version-migrator pod/migrator-68977b5ff7-7gxnc node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-drvsd container/migrator container exited with code 2 (Error): I0918 02:15:46.857094       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0918 02:15:46.857241       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0918 02:15:46.857248       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0918 02:15:46.857254       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0918 02:15:46.857262       1 migrator.go:18] FLAG: --kubeconfig=""\nI0918 02:15:46.857267       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0918 02:15:46.857274       1 migrator.go:18] FLAG: --log_dir=""\nI0918 02:15:46.857287       1 migrator.go:18] FLAG: --log_file=""\nI0918 02:15:46.857292       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0918 02:15:46.857296       1 migrator.go:18] FLAG: --logtostderr="true"\nI0918 02:15:46.857301       1 migrator.go:18] FLAG: --skip_headers="false"\nI0918 02:15:46.857306       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0918 02:15:46.857310       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0918 02:15:46.857315       1 migrator.go:18] FLAG: --v="2"\nI0918 02:15:46.857320       1 migrator.go:18] FLAG: --vmodule=""\nI0918 02:15:46.859504       1 reflector.go:175] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125\n
Sep 18 02:50:04.995 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-drvsd container/config-reloader container exited with code 2 (Error): 2020/09/18 02:05:48 Watching directory: "/etc/alertmanager/config"\n
Sep 18 02:50:04.995 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-drvsd container/alertmanager-proxy container exited with code 2 (Error): 2020/09/18 02:05:49 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 02:05:49 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 02:05:49 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 02:05:49 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/18 02:05:49 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/09/18 02:05:49 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 02:05:49 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2020/09/18 02:05:49 http.go:107: HTTPS: listening on [::]:9095\nI0918 02:05:49.448256       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 18 02:50:05.014 E ns/openshift-monitoring pod/prometheus-adapter-587cc4b84c-mwkm6 node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-drvsd container/prometheus-adapter container exited with code 2 (Error): I0918 02:05:15.609823       1 adapter.go:94] successfully using in-cluster auth\nI0918 02:05:16.740212       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0918 02:05:16.740236       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0918 02:05:16.740573       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0918 02:05:16.741564       1 secure_serving.go:178] Serving securely on [::]:6443\nI0918 02:05:16.741729       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
Sep 18 02:50:05.112 E ns/openshift-monitoring pod/telemeter-client-57d57c5557-5lcb8 node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-drvsd container/reload container exited with code 2 (Error): 
Sep 18 02:50:05.112 E ns/openshift-monitoring pod/telemeter-client-57d57c5557-5lcb8 node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-drvsd container/telemeter-client container exited with code 2 (Error): 
Sep 18 02:50:16.949 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-m1mlgj10-86b9b-tshnf-worker-b-r599d container/prometheus container exited with code 2 (Error): level=error ts=2020-09-18T02:50:13.528Z caller=main.go:285 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Sep 18 02:55:00.643 E ns/openshift-monitoring pod/thanos-querier-6f45f9dbf7-s4qwx node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-zlnnp container/oauth-proxy container exited with code 2 (Error): 2020/09/18 02:50:11 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/18 02:50:11 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 02:50:11 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 02:50:11 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/18 02:50:11 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/09/18 02:50:11 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/18 02:50:11 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2020/09/18 02:50:11 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/18 02:50:11 http.go:107: HTTPS: listening on [::]:9091\nI0918 02:50:11.070724       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 02:53:00 oauthproxy.go:785: basicauth: 10.128.0.2:51786 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 02:54:00 oauthproxy.go:785: basicauth: 10.128.0.2:55168 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 18 02:55:00.722 E ns/openshift-monitoring pod/kube-state-metrics-6f94d68747-c8tj8 node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-zlnnp container/kube-state-metrics container exited with code 2 (Error): 
Sep 18 03:04:59.427 E ns/e2e-test-ldap-group-sync-kb8sm pod/groupsync node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-zlnnp container/groupsync init container exited with code 137 (Error): 
Sep 18 03:04:59.427 E ns/e2e-test-ldap-group-sync-kb8sm pod/groupsync node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-zlnnp reason/Failed (): 
Sep 18 03:04:59.427 E ns/e2e-test-ldap-group-sync-kb8sm pod/groupsync node/ci-op-m1mlgj10-86b9b-tshnf-worker-c-zlnnp container/groupsync container exited with code 137 (Error): 

				
Click to see stdout/stderr from junit_e2e_20200918-030734.xml



104 Passed Tests

38 Skipped Tests