Result: FAILURE
Tests: 3 failed / 120 succeeded
Started: 2020-09-20 00:15
Elapsed: 1h35m
Work namespace: ci-op-h395pzip
Pod: 5c03c2c2-fad6-11ea-a1fd-0a580a800db2
Repos: {'openshift/release': 'master'}
Revision: 1

Test Failures

openshift-tests Monitor cluster while tests execute (53m51s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
84 error-level events were detected during this test run (see the analysis sketch after the event list):

Sep 20 00:45:04.118 E ns/openshift-controller-manager pod/controller-manager-mqwvx node/ci-op-h395pzip-511d7-ztdvq-master-0 container/controller-manager container exited with code 137 (Error): I0920 00:28:52.127558       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (v0.0.0-alpha.0-134-g36e439c1)\nI0920 00:28:52.130422       1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a24718f9dbc49b225f0885501722f1c2330fdb9e317aa4740437717a0f57605d"\nI0920 00:28:52.130442       1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3453189108554c3fe0ee1a49c2d563a8a0dd59a0e736333a369ffe8af7b42caa"\nI0920 00:28:52.130545       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0920 00:28:52.131383       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Sep 20 00:47:58.249 E ns/e2e-disruption-1702 pod/rs-6t5xm node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/busybox container exited with code 137 (Error): 
Sep 20 00:48:20.076 E ns/e2e-daemonsets-4177 pod/daemon-set-hj4bb node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/app container exited with code 2 (Error): 
Sep 20 00:48:21.332 E ns/e2e-daemonsets-4177 pod/daemon-set-t6jtf node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/app container exited with code 2 (Error): 
Sep 20 00:48:53.895 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op-h395pzip-511d7-ztdvq-master-2 node/ci-op-h395pzip-511d7-ztdvq-master-2 container/kube-scheduler container exited with code 255 (Error): 6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=18609&timeout=9m40s&timeoutSeconds=580&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:48:53.127478       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=18609&timeout=8m25s&timeoutSeconds=505&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:48:53.128104       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=24173&timeout=6m44s&timeoutSeconds=404&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:48:53.130573       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=18604&timeout=8m28s&timeoutSeconds=508&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:48:53.130610       1 reflector.go:382] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=25151&timeoutSeconds=509&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:48:53.131348       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=18604&timeout=6m48s&timeoutSeconds=408&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0920 00:48:53.485847       1 leaderelection.go:277] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0920 00:48:53.485882       1 server.go:244] leaderelection lost\n
Sep 20 00:49:19.001 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-h395pzip-511d7-ztdvq-master-2 node/ci-op-h395pzip-511d7-ztdvq-master-2 container/setup init container exited with code 124 (Error): ................................................................................
Sep 20 00:49:23.025 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-h395pzip-511d7-ztdvq-master-2 node/ci-op-h395pzip-511d7-ztdvq-master-2 container/cluster-policy-controller container exited with code 255 (Error): 1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Namespace: Get https://localhost:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=25292&timeout=6m9s&timeoutSeconds=369&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:49:22.151061       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=18604&timeout=7m55s&timeoutSeconds=475&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:49:22.152126       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=21533&timeout=8m14s&timeoutSeconds=494&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:49:22.155025       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=24173&timeout=8m24s&timeoutSeconds=504&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:49:22.168197       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.RoleBinding: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=25288&timeout=5m8s&timeoutSeconds=308&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:49:22.169245       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=18606&timeout=5m54s&timeoutSeconds=354&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0920 00:49:22.743962       1 leaderelection.go:277] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0920 00:49:22.744026       1 policy_controller.go:94] leaderelection lost\n
Sep 20 00:51:04.456 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-h395pzip-511d7-ztdvq-master-2 node/ci-op-h395pzip-511d7-ztdvq-master-2 container/setup init container exited with code 124 (Error): ................................................................................
Sep 20 00:54:29.927 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op-h395pzip-511d7-ztdvq-master-1 node/ci-op-h395pzip-511d7-ztdvq-master-1 container/kube-scheduler container exited with code 255 (Error): timeout=8m1s&timeoutSeconds=481&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:54:28.893782       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=20835&timeout=8m52s&timeoutSeconds=532&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:54:28.894809       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=27426&timeout=9m46s&timeoutSeconds=586&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:54:28.904521       1 reflector.go:382] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=27071&timeout=5m52s&timeoutSeconds=352&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:54:28.905230       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=20835&timeout=6m11s&timeoutSeconds=371&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:54:28.913338       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=24629&timeout=8m15s&timeoutSeconds=495&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0920 00:54:29.659492       1 leaderelection.go:277] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0920 00:54:29.659527       1 server.go:244] leaderelection lost\n
Sep 20 00:54:30.960 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-h395pzip-511d7-ztdvq-master-1 node/ci-op-h395pzip-511d7-ztdvq-master-1 container/kube-controller-manager container exited with code 255 (Error): ceVersion=21513&timeout=8m29s&timeoutSeconds=509&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:54:30.048067       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/monitoring.coreos.com/v1/prometheusrules?allowWatchBookmarks=true&resourceVersion=27580&timeout=5m8s&timeoutSeconds=308&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0920 00:54:30.049294       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0920 00:54:30.049400       1 controllermanager.go:291] leaderelection lost\nI0920 00:54:30.049404       1 daemon_controller.go:311] Shutting down daemon sets controller\nI0920 00:54:30.049412       1 job_controller.go:157] Shutting down job controller\nI0920 00:54:30.121754       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (23h23m34.935937116s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0920 00:54:30.049417       1 endpoints_controller.go:199] Shutting down endpoint controller\nI0920 00:54:30.121792       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (23h23m34.935937116s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0920 00:54:30.121829       1 reflector.go:181] Stopping reflector *v1beta1.EndpointSlice (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0920 00:54:30.067520       1 garbagecollector.go:146] Shutting down garbage collector controller\nI0920 00:54:30.121875       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (23h12m18.572128099s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0920 00:54:30.067547       1 attach_detach_controller.go:387] Shutting down attach detach controller\nI0920 00:54:30.067567       1 resource_quota_controller.go:291] Shutting down resource quota controller\nI0920 00:54:30.067632       1 pv_controller_base.go:311] Shutting down persistent volume controller\n
Sep 20 00:54:55.055 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-h395pzip-511d7-ztdvq-master-1 node/ci-op-h395pzip-511d7-ztdvq-master-1 container/setup init container exited with code 124 (Error): ................................................................................
Sep 20 00:55:04.097 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-h395pzip-511d7-ztdvq-master-1 node/ci-op-h395pzip-511d7-ztdvq-master-1 container/cluster-policy-controller container exited with code 255 (Error):  runtime/asm_amd64.s:1357: Failed to watch *v1.RoleBinding: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=27474&timeout=6m5s&timeoutSeconds=365&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:55:03.090019       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Job: Get https://localhost:6443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=21968&timeout=7m22s&timeoutSeconds=442&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:55:03.090045       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Deployment: Get https://localhost:6443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=27716&timeout=9m39s&timeoutSeconds=579&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:55:03.090203       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.ServiceAccount: Get https://localhost:6443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=27505&timeout=9m48s&timeoutSeconds=588&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:55:03.090552       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.ImageStream: Get https://localhost:6443/apis/image.openshift.io/v1/imagestreams?allowWatchBookmarks=true&resourceVersion=27440&timeout=8m5s&timeoutSeconds=485&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0920 00:55:03.090574       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1beta1.EndpointSlice: Get https://localhost:6443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=27387&timeout=8m18s&timeoutSeconds=498&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0920 00:55:03.339282       1 leaderelection.go:277] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0920 00:55:03.339446       1 policy_controller.go:94] leaderelection lost\n
Sep 20 00:56:40.429 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-h395pzip-511d7-ztdvq-master-1 node/ci-op-h395pzip-511d7-ztdvq-master-1 container/setup init container exited with code 124 (Error): ................................................................................
Sep 20 00:57:38.946 E ns/openshift-monitoring pod/prometheus-adapter-bcf4f75f6-zrtg8 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/prometheus-adapter container exited with code 2 (Error): I0920 00:39:20.961210       1 adapter.go:94] successfully using in-cluster auth\nI0920 00:39:21.492897       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0920 00:39:21.492933       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0920 00:39:21.493011       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0920 00:39:21.493692       1 secure_serving.go:178] Serving securely on [::]:6443\nI0920 00:39:21.493814       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
Sep 20 00:57:39.029 E ns/openshift-monitoring pod/grafana-689d8d5766-96v6q node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/grafana container exited with code 1 (Error): 
Sep 20 00:57:39.029 E ns/openshift-monitoring pod/grafana-689d8d5766-96v6q node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/grafana-proxy container exited with code 2 (Error): 
Sep 20 00:57:39.057 E ns/openshift-monitoring pod/thanos-querier-6b95d55b58-2mfms node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/oauth-proxy container exited with code 2 (Error): zation header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:41:33 oauthproxy.go:782: requestauth: 10.130.0.2:33262 [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\n2020/09/20 00:41:34 oauthproxy.go:774: basicauth: 10.130.0.2:33308 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:41:34 oauthproxy.go:782: requestauth: 10.130.0.2:33308 [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\n2020/09/20 00:41:35 oauthproxy.go:774: basicauth: 10.130.0.2:33332 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:41:35 oauthproxy.go:782: requestauth: 10.130.0.2:33332 [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\n2020/09/20 00:41:44 oauthproxy.go:774: basicauth: 10.130.0.2:33532 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:41:45 oauthproxy.go:782: requestauth: 10.130.0.2:33532 [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\n2020/09/20 00:45:32 oauthproxy.go:774: basicauth: 10.130.0.2:40834 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:45:40 oauthproxy.go:774: basicauth: 10.130.0.2:41128 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:47:32 oauthproxy.go:774: basicauth: 10.130.0.2:43792 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:48:32 oauthproxy.go:774: basicauth: 10.130.0.2:45368 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:52:32 oauthproxy.go:774: basicauth: 10.130.0.2:35420 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:55:32 oauthproxy.go:774: basicauth: 10.130.0.2:42048 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 20 00:57:57.724 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-h395pzip-511d7-ztdvq-worker-c7bdv container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-20T00:57:52.390Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-20T00:57:52.392Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-20T00:57:52.397Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-20T00:57:52.398Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-20T00:57:52.398Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-20T00:57:52.398Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-20T00:57:52.398Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-20T00:57:52.398Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-20T00:57:52.398Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-20T00:57:52.398Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-20T00:57:52.398Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-20T00:57:52.398Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-20T00:57:52.398Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-20T00:57:52.398Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-20T00:57:52.399Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-20T00:57:52.399Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-20
Sep 20 00:58:38.827 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-h395pzip-511d7-ztdvq-master-1 node/ci-op-h395pzip-511d7-ztdvq-master-1 container/setup init container exited with code 124 (Error): ................................................................................
Sep 20 01:01:06.351 E ns/openshift-monitoring pod/thanos-querier-6b95d55b58-gc4rt node/ci-op-h395pzip-511d7-ztdvq-worker-c7bdv container/oauth-proxy container exited with code 2 (Error): 2020/09/20 00:57:46 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/20 00:57:46 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 00:57:46 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 00:57:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/20 00:57:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 00:57:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/20 00:57:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 00:57:46 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/20 00:57:46 http.go:107: HTTPS: listening on [::]:9091\nI0920 00:57:46.495926       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/20 00:59:32 oauthproxy.go:774: basicauth: 10.130.0.2:48126 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 20 01:01:06.368 E ns/openshift-kube-storage-version-migrator pod/migrator-694cbcdfd9-bfslv node/ci-op-h395pzip-511d7-ztdvq-worker-c7bdv container/migrator container exited with code 2 (Error): 
Sep 20 01:01:07.401 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-h395pzip-511d7-ztdvq-worker-c7bdv container/rules-configmap-reloader container exited with code 2 (Error): 2020/09/20 00:57:56 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 20 01:01:07.401 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-h395pzip-511d7-ztdvq-worker-c7bdv container/prometheus-proxy container exited with code 2 (Error): 2020/09/20 00:57:56 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/20 00:57:56 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 00:57:56 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 00:57:57 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/20 00:57:57 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 00:57:57 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/20 00:57:57 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 00:57:57 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0920 00:57:57.006673       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/20 00:57:57 http.go:107: HTTPS: listening on [::]:9091\n
Sep 20 01:01:07.401 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-h395pzip-511d7-ztdvq-worker-c7bdv container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-20T00:57:56.413055844Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.38.1'."\nlevel=error ts=2020-09-20T00:57:56.414466352Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-20T00:58:01.548617204Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-20T00:58:01.548725618Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Sep 20 01:01:07.416 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-7895b89dfd-2995f node/ci-op-h395pzip-511d7-ztdvq-worker-c7bdv container/operator container exited with code 255 (Error):  Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False (""),Available changed from False to True ("")\nI0920 00:57:45.732511       1 operator.go:148] Finished syncing operator at 29.69585ms\nI0920 00:59:41.859110       1 operator.go:146] Starting syncing operator at 2020-09-20 00:59:41.859104255 +0000 UTC m=+1552.889535671\nI0920 00:59:41.881358       1 operator.go:148] Finished syncing operator at 22.24697ms\nI0920 00:59:41.882251       1 operator.go:146] Starting syncing operator at 2020-09-20 00:59:41.882246712 +0000 UTC m=+1552.912678139\nI0920 00:59:41.904004       1 operator.go:148] Finished syncing operator at 21.75127ms\nI0920 00:59:42.648841       1 operator.go:146] Starting syncing operator at 2020-09-20 00:59:42.648832636 +0000 UTC m=+1553.679264066\nI0920 00:59:42.669399       1 operator.go:148] Finished syncing operator at 20.550024ms\nI0920 00:59:42.747899       1 operator.go:146] Starting syncing operator at 2020-09-20 00:59:42.747890993 +0000 UTC m=+1553.778322418\nI0920 00:59:42.768284       1 operator.go:148] Finished syncing operator at 20.38513ms\nI0920 00:59:42.846806       1 operator.go:146] Starting syncing operator at 2020-09-20 00:59:42.846798078 +0000 UTC m=+1553.877229513\nI0920 00:59:42.869059       1 operator.go:148] Finished syncing operator at 22.253601ms\nI0920 00:59:42.948194       1 operator.go:146] Starting syncing operator at 2020-09-20 00:59:42.948186459 +0000 UTC m=+1553.978617874\nI0920 00:59:43.470402       1 operator.go:148] Finished syncing operator at 522.208819ms\nI0920 01:01:05.498780       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0920 01:01:05.499113       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0920 01:01:05.499125       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0920 01:01:05.499134       1 logging_controller.go:93] Shutting down LogLevelController\nF0920 01:01:05.499203       1 builder.go:243] stopped\n
Sep 20 01:01:20.472 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-h395pzip-511d7-ztdvq-worker-c7bdv container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-20T01:01:18.704Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-20T01:01:18.707Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-20T01:01:18.709Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-20T01:01:18.709Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-20T01:01:18.709Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-20T01:01:18.709Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-20T01:01:18.710Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-20T01:01:18.710Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-20T01:01:18.710Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-20T01:01:18.710Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-20T01:01:18.710Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-20T01:01:18.710Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-20T01:01:18.710Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-20T01:01:18.710Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-20T01:01:18.712Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-20T01:01:18.712Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-20
Sep 20 01:02:09.654 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-h395pzip-511d7-ztdvq-worker-c7bdv container/config-reloader container exited with code 2 (Error): 2020/09/20 01:01:18 Watching directory: "/etc/alertmanager/config"\n
Sep 20 01:02:09.654 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-h395pzip-511d7-ztdvq-worker-c7bdv container/alertmanager-proxy container exited with code 2 (Error): 2020/09/20 01:01:18 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 01:01:18 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 01:01:18 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 01:01:18 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/20 01:01:18 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 01:01:18 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 01:01:18 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 01:01:18 http.go:107: HTTPS: listening on [::]:9095\nI0920 01:01:18.709764       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 20 01:02:15.225 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-20T01:02:13.370Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-20T01:02:13.376Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-20T01:02:13.376Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-20T01:02:13.377Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-20T01:02:13.377Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-20T01:02:13.377Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-20T01:02:13.377Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-20T01:02:13.377Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-20T01:02:13.377Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-20T01:02:13.377Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-20T01:02:13.377Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-20T01:02:13.377Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-20T01:02:13.377Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-20T01:02:13.377Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-20T01:02:13.380Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-20T01:02:13.380Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-20
Sep 20 01:09:41.576 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-h395pzip-511d7-ztdvq-worker-c7bdv container/config-reloader container exited with code 2 (Error): 2020/09/20 01:07:59 Watching directory: "/etc/alertmanager/config"\n
Sep 20 01:09:41.576 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-h395pzip-511d7-ztdvq-worker-c7bdv container/alertmanager-proxy container exited with code 2 (Error): 2020/09/20 01:07:59 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 01:07:59 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 01:07:59 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 01:07:59 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/20 01:07:59 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 01:07:59 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 01:07:59 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 01:07:59 http.go:107: HTTPS: listening on [::]:9095\nI0920 01:07:59.540884       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 20 01:09:42.342 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/config-reloader container exited with code 2 (Error): 2020/09/20 00:34:01 Watching directory: "/etc/alertmanager/config"\n
Sep 20 01:09:42.342 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/alertmanager-proxy container exited with code 2 (Error): 2020/09/20 00:34:01 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 00:34:01 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 00:34:01 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 00:34:01 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/20 00:34:01 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 00:34:01 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 00:34:01 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 00:34:01 http.go:107: HTTPS: listening on [::]:9095\nI0920 00:34:01.298054       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 20 01:09:42.376 E ns/openshift-monitoring pod/grafana-689d8d5766-qkccp node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/grafana container exited with code 1 (Error): 
Sep 20 01:09:42.376 E ns/openshift-monitoring pod/grafana-689d8d5766-qkccp node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/grafana-proxy container exited with code 2 (Error): 
Sep 20 01:09:42.395 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-bd96cf5c4-hksw6 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/snapshot-controller container exited with code 2 (Error): 
Sep 20 01:09:42.428 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/config-reloader container exited with code 2 (Error): 2020/09/20 00:34:01 Watching directory: "/etc/alertmanager/config"\n
Sep 20 01:09:42.428 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/alertmanager-proxy container exited with code 2 (Error): 2020/09/20 00:34:01 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 00:34:01 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 00:34:01 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 00:34:01 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/20 00:34:01 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 00:34:01 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 00:34:01 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 00:34:01 http.go:107: HTTPS: listening on [::]:9095\nI0920 00:34:01.296141       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 20 01:09:42.463 E ns/openshift-monitoring pod/thanos-querier-6b95d55b58-fkml6 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/oauth-proxy container exited with code 2 (Error): roxy.go:782: requestauth: 10.130.0.2:35924 [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\n2020/09/20 00:44:32 oauthproxy.go:774: basicauth: 10.130.0.2:38322 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:46:32 oauthproxy.go:774: basicauth: 10.130.0.2:42338 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:49:32 oauthproxy.go:774: basicauth: 10.130.0.2:56000 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:50:32 oauthproxy.go:774: basicauth: 10.130.0.2:58436 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:51:32 oauthproxy.go:774: basicauth: 10.130.0.2:60868 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:53:32 oauthproxy.go:774: basicauth: 10.130.0.2:37854 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:54:32 oauthproxy.go:774: basicauth: 10.130.0.2:40348 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:56:32 oauthproxy.go:774: basicauth: 10.130.0.2:43606 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:57:32 oauthproxy.go:774: basicauth: 10.130.0.2:44970 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 00:58:32 oauthproxy.go:774: basicauth: 10.130.0.2:46454 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:00:32 oauthproxy.go:774: basicauth: 10.130.0.2:49456 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:06:32 oauthproxy.go:774: basicauth: 10.130.0.2:58206 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:09:32 oauthproxy.go:774: basicauth: 10.130.0.2:34526 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 20 01:09:52.396 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-20T01:09:49.602Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-20T01:09:49.605Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-20T01:09:49.605Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-20T01:09:49.605Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-20T01:09:49.605Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-20T01:09:49.605Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-20T01:09:49.605Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-20T01:09:49.605Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-20T01:09:49.605Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-20T01:09:49.605Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-20T01:09:49.605Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-20T01:09:49.605Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-20T01:09:49.606Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-20T01:09:49.606Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-20T01:09:49.617Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-20T01:09:49.617Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-20
Sep 20 01:16:23.910 E ns/openshift-monitoring pod/prometheus-adapter-bcf4f75f6-jnfdl node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/prometheus-adapter container exited with code 2 (Error): I0920 01:01:08.801671       1 adapter.go:94] successfully using in-cluster auth\nI0920 01:01:09.109308       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0920 01:01:09.109338       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0920 01:01:09.109666       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0920 01:01:09.110688       1 secure_serving.go:178] Serving securely on [::]:6443\nI0920 01:01:09.110823       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
Sep 20 01:16:23.936 E ns/openshift-kube-storage-version-migrator pod/migrator-694cbcdfd9-ln92s node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/migrator container exited with code 2 (Error): 
Sep 20 01:16:23.956 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-bd96cf5c4-kvhhk node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/snapshot-controller container exited with code 2 (Error): 
Sep 20 01:16:23.989 E ns/openshift-monitoring pod/openshift-state-metrics-85b46b6f6c-7ccw4 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/openshift-state-metrics container exited with code 2 (Error): 
Sep 20 01:16:24.017 E ns/openshift-monitoring pod/grafana-689d8d5766-hqftj node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/grafana container exited with code 1 (Error): 
Sep 20 01:16:24.017 E ns/openshift-monitoring pod/grafana-689d8d5766-hqftj node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/grafana-proxy container exited with code 2 (Error): 
Sep 20 01:16:24.042 E ns/openshift-monitoring pod/kube-state-metrics-cf7bc857f-m4tfg node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/kube-state-metrics container exited with code 2 (Error): 
Sep 20 01:16:24.069 E ns/openshift-monitoring pod/telemeter-client-5c7cb44585-b9vw5 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/reload container exited with code 2 (Error): 
Sep 20 01:16:24.069 E ns/openshift-monitoring pod/telemeter-client-5c7cb44585-b9vw5 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/telemeter-client container exited with code 2 (Error): 
Sep 20 01:16:24.096 E ns/openshift-monitoring pod/thanos-querier-6b95d55b58-j8s2h node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/oauth-proxy container exited with code 2 (Error): 2020/09/20 01:09:43 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/20 01:09:43 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 01:09:43 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 01:09:43 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/20 01:09:43 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 01:09:43 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/20 01:09:43 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 01:09:43 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/20 01:09:43 http.go:107: HTTPS: listening on [::]:9091\nI0920 01:09:43.242567       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/20 01:10:32 oauthproxy.go:774: basicauth: 10.130.0.2:35972 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:14:32 oauthproxy.go:774: basicauth: 10.130.0.2:41602 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:15:32 oauthproxy.go:774: basicauth: 10.130.0.2:43140 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 20 01:16:24.132 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/config-reloader container exited with code 2 (Error): 2020/09/20 01:09:50 Watching directory: "/etc/alertmanager/config"\n
Sep 20 01:16:24.132 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/alertmanager-proxy container exited with code 2 (Error): 2020/09/20 01:09:51 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 01:09:51 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 01:09:51 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 01:09:51 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/20 01:09:51 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 01:09:51 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 01:09:51 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 01:09:51 http.go:107: HTTPS: listening on [::]:9095\nI0920 01:09:51.243767       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 20 01:16:24.178 E ns/openshift-marketplace pod/community-operators-5c4cc6c874-ldw77 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/community-operators container exited with code 2 (Error): 
Sep 20 01:16:24.211 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/rules-configmap-reloader container exited with code 2 (Error): 2020/09/20 01:02:13 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 20 01:16:24.211 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/prometheus-proxy container exited with code 2 (Error): 2020/09/20 01:02:14 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/20 01:02:14 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 01:02:14 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 01:02:14 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/20 01:02:14 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 01:02:14 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/20 01:02:14 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 01:02:14 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0920 01:02:14.143960       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/20 01:02:14 http.go:107: HTTPS: listening on [::]:9091\n2020/09/20 01:06:17 oauthproxy.go:774: basicauth: 10.130.0.22:40376 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:06:17 oauthproxy.go:774: basicauth: 10.130.0.22:40376 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:10:16 oauthproxy.go:774: basicauth: 10.129.2.87:60116 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:14:46 oauthproxy.go:774: basicauth: 10.129.2.87:38788 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 20 01:16:24.211 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-20T01:02:13.52504628Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.38.1'."\nlevel=error ts=2020-09-20T01:02:13.526699766Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-20T01:02:18.663071312Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-20T01:02:18.663152775Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Sep 20 01:16:24.238 E ns/openshift-marketplace pod/certified-operators-7fd784bb4f-b8mcc node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/certified-operators container exited with code 2 (Error): 
Sep 20 01:16:24.314 E ns/openshift-monitoring pod/prometheus-adapter-bcf4f75f6-vqc87 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/prometheus-adapter container exited with code 2 (Error): I0920 01:09:42.535813       1 adapter.go:94] successfully using in-cluster auth\nI0920 01:09:42.970473       1 secure_serving.go:178] Serving securely on [::]:6443\nI0920 01:09:42.970877       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0920 01:09:42.970899       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0920 01:09:42.970911       1 tlsconfig.go:219] Starting DynamicServingCertificateController\nI0920 01:09:42.972839       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\n
Sep 20 01:16:25.158 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-7895b89dfd-h84c8 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/operator container exited with code 255 (Error): e","type":"Progressing"},{"lastTransitionTime":"2020-09-20T01:16:21Z","message":"Available: Waiting for Deployment to deploy csi-snapshot-controller pods","reason":"_AsExpected","status":"False","type":"Available"},{"lastTransitionTime":"2020-09-20T00:33:53Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0920 01:16:21.677020       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-cluster-storage-operator", Name:"csi-snapshot-controller-operator", UID:"e2a19c0b-af7b-44dc-b2d0-414ac79b5466", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("Progressing: Waiting for Deployment to deploy csi-snapshot-controller pods"),Available changed from True to False ("Available: Waiting for Deployment to deploy csi-snapshot-controller pods")\nI0920 01:16:21.721908       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-cluster-storage-operator", Name:"csi-snapshot-controller-operator", UID:"e2a19c0b-af7b-44dc-b2d0-414ac79b5466", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("Progressing: Waiting for Deployment to deploy csi-snapshot-controller pods"),Available changed from True to False ("Available: Waiting for Deployment to deploy csi-snapshot-controller pods")\nI0920 01:16:21.746970       1 operator.go:148] Finished syncing operator at 92.91713ms\nI0920 01:16:21.832790       1 operator.go:146] Starting syncing operator at 2020-09-20 01:16:21.832781911 +0000 UTC m=+906.758124036\nI0920 01:16:21.884010       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0920 01:16:21.884198       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nF0920 01:16:21.884258       1 builder.go:210] server exited\n
Sep 20 01:16:25.222 E ns/openshift-marketplace pod/redhat-operators-777b57dc4c-t9htf node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/redhat-operators container exited with code 2 (Error): 
Sep 20 01:16:25.308 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/rules-configmap-reloader container exited with code 2 (Error): 2020/09/20 01:09:50 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 20 01:16:25.308 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/prometheus-proxy container exited with code 2 (Error): 2020/09/20 01:09:51 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/20 01:09:51 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 01:09:51 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 01:09:51 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/20 01:09:51 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 01:09:51 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/20 01:09:51 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 01:09:51 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/20 01:09:51 http.go:107: HTTPS: listening on [::]:9091\nI0920 01:09:51.206924       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 20 01:16:25.308 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-20T01:09:49.762179099Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.38.1'."\nlevel=error ts=2020-09-20T01:09:49.763427911Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-20T01:09:54.880094874Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-20T01:09:54.880149254Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Sep 20 01:16:25.351 E ns/openshift-monitoring pod/thanos-querier-6b95d55b58-z2wb2 node/ci-op-h395pzip-511d7-ztdvq-worker-8p2lb container/oauth-proxy container exited with code 2 (Error): > "^/metrics"\n2020/09/20 01:01:08 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/20 01:01:08 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 01:01:08 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/20 01:01:08 http.go:107: HTTPS: listening on [::]:9091\nI0920 01:01:08.328029       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/20 01:01:32 oauthproxy.go:774: basicauth: 10.130.0.2:51026 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:02:32 oauthproxy.go:774: basicauth: 10.130.0.2:52442 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:03:32 oauthproxy.go:774: basicauth: 10.130.0.2:53792 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:04:32 oauthproxy.go:774: basicauth: 10.130.0.2:55190 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:05:32 oauthproxy.go:774: basicauth: 10.130.0.2:56810 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:07:32 oauthproxy.go:774: basicauth: 10.130.0.2:59666 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:08:32 oauthproxy.go:774: basicauth: 10.130.0.2:33062 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:11:32 oauthproxy.go:774: basicauth: 10.130.0.2:37368 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:12:32 oauthproxy.go:774: basicauth: 10.130.0.2:38746 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:13:32 oauthproxy.go:774: basicauth: 10.130.0.2:40134 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 20 01:16:37.989 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-20T01:16:36.303Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-20T01:16:36.305Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-20T01:16:36.306Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-20T01:16:36.306Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-20T01:16:36.306Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-20T01:16:36.306Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-20T01:16:36.306Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-20T01:16:36.306Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-20T01:16:36.306Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-20T01:16:36.306Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-20T01:16:36.306Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-20T01:16:36.306Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-20T01:16:36.306Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-20T01:16:36.306Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-20T01:16:36.308Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-20T01:16:36.308Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-20
Sep 20 01:16:38.211 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-h395pzip-511d7-ztdvq-worker-c7bdv container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-20T01:16:36.843Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-20T01:16:36.845Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-20T01:16:36.845Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-20T01:16:36.846Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-20T01:16:36.846Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-20T01:16:36.846Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-20T01:16:36.846Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-20T01:16:36.846Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-20T01:16:36.846Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-20T01:16:36.846Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-20T01:16:36.846Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-20T01:16:36.846Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-20T01:16:36.846Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-20T01:16:36.846Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-20T01:16:36.848Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-20T01:16:36.848Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-20
Sep 20 01:20:24.170 E ns/e2e-daemonsets-3258 pod/daemon-set-2rg5t node/ci-op-h395pzip-511d7-ztdvq-worker-s4nzt reason/Failed (): 
Sep 20 01:22:26.810 E ns/e2e-volumelimits-9110 pod/csi-hostpathplugin-0 node/ci-op-h395pzip-511d7-ztdvq-worker-s4nzt container/hostpath container exited with code 2 (Error): 
Sep 20 01:22:26.810 E ns/e2e-volumelimits-9110 pod/csi-hostpathplugin-0 node/ci-op-h395pzip-511d7-ztdvq-worker-s4nzt container/liveness-probe container exited with code 2 (Error): 
Sep 20 01:22:26.810 E ns/e2e-volumelimits-9110 pod/csi-hostpathplugin-0 node/ci-op-h395pzip-511d7-ztdvq-worker-s4nzt container/node-driver-registrar container exited with code 2 (Error): 
Sep 20 01:22:26.821 E ns/e2e-volumelimits-9110 pod/csi-hostpath-resizer-0 node/ci-op-h395pzip-511d7-ztdvq-worker-s4nzt container/csi-resizer container exited with code 255 (Error): Lost connection to CSI driver, exiting
Sep 20 01:22:26.914 E ns/e2e-volumelimits-9110 pod/csi-hostpath-attacher-0 node/ci-op-h395pzip-511d7-ztdvq-worker-s4nzt container/csi-attacher container exited with code 2 (Error): 
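Triage note: the five e2e-volumelimits events above are the CSI hostpath pod being torn down. The csi-resizer exit is deliberate sidecar behavior: when its gRPC connection to the driver socket dies it exits non-zero and lets the kubelet restart the container, which is why it records "Lost connection to CSI driver, exiting" with code 255. A minimal watchdog sketch with plain grpc-go, under the assumption of an illustrative socket path; the real sidecars wire this up differently:

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/connectivity"
    )

    func main() {
        // Socket path is illustrative; real sidecars take it as a flag.
        conn, err := grpc.Dial("unix:///csi/csi.sock", grpc.WithInsecure())
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()
        conn.Connect() // force the connection out of idle

        for {
            state := conn.GetState()
            if state == connectivity.TransientFailure || state == connectivity.Shutdown {
                log.Print("Lost connection to CSI driver, exiting")
                os.Exit(255) // the exit code recorded in the event above
            }
            // Block until the connection state changes (or a minute passes).
            ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
            conn.WaitForStateChange(ctx, state)
            cancel()
        }
    }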
Sep 20 01:23:22.855 E ns/e2e-test-topology-manager-f95l7 pod/test-7rn87 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/test-0 container exited with code 137 (Error): 
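Triage note: exit code 137 on the topology-manager test containers follows the shell convention of 128 + signal number, so 137 = 128 + 9: the container was killed with SIGKILL (typically by the runtime during forced teardown or after a failed probe) rather than failing on its own. A small decoder for the codes that recur in this run:

    package main

    import "fmt"

    // Exit codes above 128 conventionally encode a fatal signal:
    // code = 128 + signal number, so 137 = 128 + 9 (SIGKILL) and
    // 143 = 128 + 15 (SIGTERM). 255 and below-128 codes are literal
    // exit statuses set by the process itself.
    func describe(code int) string {
        if code > 128 && code < 255 {
            return fmt.Sprintf("terminated by signal %d", code-128)
        }
        return fmt.Sprintf("exited with status %d", code)
    }

    func main() {
        // The codes that recur in this run's events.
        for _, c := range []int{137, 255, 2, 1} {
            fmt.Printf("exit code %3d: %s\n", c, describe(c))
        }
    }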
Sep 20 01:24:31.022 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-bd96cf5c4-nmnl6 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/snapshot-controller container exited with code 2 (Error): 
Sep 20 01:24:31.047 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/rules-configmap-reloader container exited with code 2 (Error): 2020/09/20 01:16:36 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 20 01:24:31.047 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/prometheus-proxy container exited with code 2 (Error): 2020/09/20 01:16:37 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/20 01:16:37 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 01:16:37 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 01:16:37 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/20 01:16:37 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 01:16:37 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/20 01:16:37 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 01:16:37 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0920 01:16:37.070525       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/20 01:16:37 http.go:107: HTTPS: listening on [::]:9091\n2020/09/20 01:22:04 oauthproxy.go:774: basicauth: 10.130.0.22:35552 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:22:04 oauthproxy.go:774: basicauth: 10.130.0.22:35552 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 20 01:24:31.047 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-20T01:16:36.433496234Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.38.1'."\nlevel=error ts=2020-09-20T01:16:36.43531414Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-20T01:16:41.545544608Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-20T01:16:41.545621313Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Sep 20 01:24:31.058 E ns/openshift-monitoring pod/prometheus-adapter-bcf4f75f6-cdgnl node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/prometheus-adapter container exited with code 2 (Error): I0920 01:16:23.415528       1 adapter.go:94] successfully using in-cluster auth\nI0920 01:16:33.473432       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0920 01:16:33.473432       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0920 01:16:33.473627       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0920 01:16:33.474270       1 secure_serving.go:178] Serving securely on [::]:6443\nI0920 01:16:33.474348       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
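Triage note: the prometheus-adapter log shows the serving-cert pattern common to the components in this run ("Starting serving-cert::/etc/tls/private/tls.crt...", "Starting DynamicServingCertificateController"): the certificate is resolved per TLS handshake, so rotated files under /etc/tls/private are picked up without a restart. A simplified sketch of that idea using Go's tls.Config.GetCertificate hook — the real controller watches the files, while this sketch polls; file paths and port are taken from the log:

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
        "sync"
        "time"
    )

    // certStore re-reads the key pair periodically so rotated certs are
    // served without a process restart.
    type certStore struct {
        mu   sync.RWMutex
        cert *tls.Certificate
    }

    func (s *certStore) watch(certFile, keyFile string) {
        for {
            if c, err := tls.LoadX509KeyPair(certFile, keyFile); err == nil {
                s.mu.Lock()
                s.cert = &c
                s.mu.Unlock()
            }
            time.Sleep(time.Minute)
        }
    }

    // get is called once per TLS handshake, so a swap in the store takes
    // effect on the very next connection.
    func (s *certStore) get(*tls.ClientHelloInfo) (*tls.Certificate, error) {
        s.mu.RLock()
        defer s.mu.RUnlock()
        return s.cert, nil
    }

    func main() {
        store := &certStore{}
        go store.watch("/etc/tls/private/tls.crt", "/etc/tls/private/tls.key")
        srv := &http.Server{
            Addr:      ":6443",
            TLSConfig: &tls.Config{GetCertificate: store.get},
        }
        log.Fatal(srv.ListenAndServeTLS("", "")) // empty paths: GetCertificate wins
    }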
Sep 20 01:24:31.079 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/config-reloader container exited with code 2 (Error): 2020/09/20 01:16:37 Watching directory: "/etc/alertmanager/config"\n
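Triage note: the one-line "Watching directory" reloaders (the rules-configmap-reloader above and the alertmanager config-reloader here) are the other reload mechanism in this pod set: watch a mounted ConfigMap directory and poke the server's reload endpoint when Kubernetes swaps the content. A sketch using fsnotify, with the directory taken from the log; the webhook URL is an assumption based on the upstream address the proxy logs:

    package main

    import (
        "log"
        "net/http"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        const dir = "/etc/alertmanager/config" // path from the log above
        watcher, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer watcher.Close()
        if err := watcher.Add(dir); err != nil {
            log.Fatal(err)
        }
        log.Printf("Watching directory: %q", dir)

        for {
            select {
            case ev := <-watcher.Events:
                // ConfigMap updates appear as symlink swaps in the mount,
                // so any event is a cue to reload.
                log.Printf("change detected: %s", ev)
                // Reload URL assumed from the proxy's upstream address.
                if resp, err := http.Post("http://localhost:9093/-/reload", "", nil); err == nil {
                    resp.Body.Close()
                }
            case err := <-watcher.Errors:
                log.Printf("watch error: %v", err)
            }
        }
    }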
Sep 20 01:24:31.079 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/alertmanager-proxy container exited with code 2 (Error): 2020/09/20 01:16:37 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 01:16:37 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 01:16:37 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 01:16:37 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/20 01:16:37 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 01:16:37 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/20 01:16:37 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 01:16:37 http.go:107: HTTPS: listening on [::]:9095\nI0920 01:16:37.348330       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 20 01:24:31.103 E ns/openshift-monitoring pod/thanos-querier-6b95d55b58-x5p95 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/oauth-proxy container exited with code 2 (Error): 2020/09/20 01:16:24 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/20 01:16:24 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/20 01:16:24 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/20 01:16:24 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/20 01:16:24 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/20 01:16:24 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/20 01:16:24 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/20 01:16:24 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/20 01:16:24 http.go:107: HTTPS: listening on [::]:9091\nI0920 01:16:24.207083       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/20 01:16:32 oauthproxy.go:774: basicauth: 10.130.0.2:44912 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:19:32 oauthproxy.go:774: basicauth: 10.130.0.2:49214 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/20 01:21:32 oauthproxy.go:774: basicauth: 10.130.0.2:52116 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 20 01:24:40.069 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-20T01:24:38.815Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-20T01:24:38.818Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-20T01:24:38.818Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-20T01:24:38.819Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-20T01:24:38.819Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-20T01:24:38.819Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-20T01:24:38.819Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-20T01:24:38.819Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-20T01:24:38.819Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-20T01:24:38.819Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-20T01:24:38.819Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-20T01:24:38.819Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-20T01:24:38.819Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-20T01:24:38.819Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-20T01:24:38.821Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-20T01:24:38.821Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-20
Sep 20 01:33:49.016 E ns/openshift-marketplace pod/opsrctestlabel-858d48dd7f-d9gmg node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/opsrctestlabel container exited with code 2 (Error): 
Sep 20 01:37:17.460 E ns/e2e-test-topology-manager-rvzqt pod/test-l98z4 node/ci-op-h395pzip-511d7-ztdvq-worker-gvgxw container/test-0 container exited with code 137 (Error):