Result: SUCCESS
Tests: 1 failed / 57 succeeded
Started: 2019-11-07 14:05
Elapsed: 1h38m
Work namespace: ci-op-g1kslvlc
Pod: 4.3.0-0.ci-2019-11-07-140257-gcp-serial

Test Failures


openshift-tests Monitor cluster while tests execute (56m54s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
26 error-level events were detected during this test run:

Nov 07 14:36:08.409 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op--k9nxs-m-2.c.openshift-gce-devel-ci.internal node/ci-op--k9nxs-m-2.c.openshift-gce-devel-ci.internal container=kube-controller-manager-6 container exited with code 255 (Error):
    ended with: too old resource version: 10774 (17518)
    W1107 14:36:07.127763       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 13484 (17518)
    W1107 14:36:07.128000       1 reflector.go:299] k8s.io/client-go/metadata/metadatainformer/informer.go:89: watch of *v1.PartialObjectMetadata ended with: too old resource version: 16929 (17523)
    W1107 14:36:07.128126       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 17496 (17514)
    W1107 14:36:07.128210       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.NetworkPolicy ended with: too old resource version: 10774 (17518)
    W1107 14:36:07.128265       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicationController ended with: too old resource version: 10774 (17517)
    W1107 14:36:07.128323       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: too old resource version: 10774 (17520)
    W1107 14:36:07.128389       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 10774 (17514)
    W1107 14:36:07.131901       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Deployment ended with: too old resource version: 17436 (17521)
    W1107 14:36:07.190508       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ValidatingWebhookConfiguration ended with: too old resource version: 17174 (17521)
    W1107 14:36:07.200459       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CSINode ended with: too old resource version: 10774 (17520)
    I1107 14:36:07.278868       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded
    F1107 14:36:07.279048       1 controllermanager.go:291] leaderelection lost
Nov 07 14:37:18.472 E ns/openshift-machine-config-operator pod/machine-config-daemon-d559t node/ci-op--k9nxs-w-b-wvsw7.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 14:37:18.472 E ns/openshift-machine-config-operator pod/machine-config-daemon-d559t node/ci-op--k9nxs-w-b-wvsw7.c.openshift-gce-devel-ci.internal container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 15:00:27.039 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 07 15:00:35.261 E ns/openshift-monitoring pod/kube-state-metrics-66547bf9f4-qnzmd node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=kube-state-metrics container exited with code 2 (Error): 
Nov 07 15:00:35.327 E ns/openshift-monitoring pod/thanos-querier-54865dd768-kkkhm node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 2 (Error):
    2019/11/07 14:37:24 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier
    2019/11/07 14:37:24 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
    2019/11/07 14:37:24 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
    2019/11/07 14:37:24 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"
    2019/11/07 14:37:24 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"
    2019/11/07 14:37:24 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier
    2019/11/07 14:37:24 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
    2019/11/07 14:37:24 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth
    2019/11/07 14:37:24 http.go:96: HTTPS: listening on [::]:9091
Nov 07 15:00:35.507 E ns/openshift-monitoring pod/grafana-64544dc5bf-fg6wz node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=grafana-proxy container exited with code 2 (Error): 
Nov 07 15:00:36.540 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 15:00:36.540 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 15:00:36.540 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 15:00:36.540 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 15:00:36.540 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 15:00:36.540 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 15:00:36.540 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 15:00:36.586 E ns/openshift-marketplace pod/certified-operators-84f957bcd9-2d8fj node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 15:00:36.669 E ns/openshift-monitoring pod/prometheus-adapter-69666497c-v5gxd node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error):
    I1107 14:29:51.605207       1 adapter.go:93] successfully using in-cluster auth
    I1107 14:29:52.469821       1 secure_serving.go:116] Serving securely on [::]:6443
Nov 07 15:00:36.696 E ns/openshift-marketplace pod/community-operators-69c4c77c58-89l9k node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 15:00:36.710 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error):
    2019/11/07 14:31:03 Watching directory: "/etc/alertmanager/config"
Nov 07 15:00:36.710 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--k9nxs-w-d-4s998.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error):
    2019/11/07 14:31:03 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main
    2019/11/07 14:31:03 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
    2019/11/07 14:31:03 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
    2019/11/07 14:31:03 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"
    2019/11/07 14:31:03 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"
    2019/11/07 14:31:03 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main
    2019/11/07 14:31:03 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
    2019/11/07 14:31:03 http.go:96: HTTPS: listening on [::]:9095
Nov 07 15:01:02.111 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--k9nxs-w-b-wvsw7.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error):
    caller=main.go:657 msg="Starting TSDB ..."
    level=info ts=2019-11-07T15:00:58.318Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090
    level=info ts=2019-11-07T15:00:58.321Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"
    level=info ts=2019-11-07T15:00:58.321Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
    level=info ts=2019-11-07T15:00:58.322Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC
    level=info ts=2019-11-07T15:00:58.322Z caller=main.go:673 msg="TSDB started"
    level=info ts=2019-11-07T15:00:58.322Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
    level=info ts=2019-11-07T15:00:58.322Z caller=main.go:526 msg="Stopping scrape discovery manager..."
    level=info ts=2019-11-07T15:00:58.322Z caller=main.go:540 msg="Stopping notify discovery manager..."
    level=info ts=2019-11-07T15:00:58.322Z caller=main.go:562 msg="Stopping scrape manager..."
    level=info ts=2019-11-07T15:00:58.322Z caller=main.go:536 msg="Notify discovery manager stopped"
    level=info ts=2019-11-07T15:00:58.323Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."
    level=info ts=2019-11-07T15:00:58.322Z caller=main.go:522 msg="Scrape discovery manager stopped"
    level=info ts=2019-11-07T15:00:58.323Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"
    level=info ts=2019-11-07T15:00:58.323Z caller=main.go:556 msg="Scrape manager stopped"
    level=info ts=2019-11-07T15:00:58.325Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."
    level=info ts=2019-11-07T15:00:58.325Z caller=main.go:727 msg="Notifier manager stopped"
    level=error ts=2019-11-07
Nov 07 15:06:59.641 E ns/openshift-marketplace pod/certified-operators-84f957bcd9-958jd node/ci-op--k9nxs-w-d-kjwmt.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 2 (Error): 
Nov 07 15:09:07.992 E ns/openshift-machine-config-operator pod/machine-config-daemon-hcvpd node/ci-op--k9nxs-w-d-kjwmt.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 07 15:11:31.469 E ns/openshift-machine-config-operator pod/machine-config-daemon-8wh7s node/ci-op--k9nxs-w-d-kjwmt.c.openshift-gce-devel-ci.internal container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 15:11:31.469 E ns/openshift-machine-config-operator pod/machine-config-daemon-8wh7s node/ci-op--k9nxs-w-d-kjwmt.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 15:15:37.061 E ns/openshift-machine-config-operator pod/machine-config-daemon-lm5zx node/ci-op--k9nxs-w-d-kjwmt.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 07 15:31:41.091 E ns/openshift-machine-config-operator pod/machine-config-daemon-4r29g node/ci-op--k9nxs-w-d-kjwmt.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
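
The reflector warnings that fill the kube-controller-manager excerpt above ("watch of *v1.ConfigMap ended with: too old resource version") are expected noise rather than the failure itself: etcd compacts old revisions, a watch resumed from a compacted resourceVersion fails with 410 Gone, and the client recovers by re-listing. Informers built on k8s.io/client-go/tools/cache do this automatically; the loop below is a minimal hand-rolled sketch of the same recovery, assuming a recent k8s.io/client-go (the watchConfigMaps function, namespace, and resource here are illustrative, not taken from this job).

    package main

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
    )

    func watchConfigMaps(ctx context.Context, client *kubernetes.Clientset) error {
        for {
            // LIST first to obtain a current resourceVersion to watch from.
            list, err := client.CoreV1().ConfigMaps("kube-system").List(ctx, metav1.ListOptions{})
            if err != nil {
                return err
            }

            w, err := client.CoreV1().ConfigMaps("kube-system").Watch(ctx, metav1.ListOptions{
                ResourceVersion: list.ResourceVersion,
            })
            if err != nil {
                if apierrors.IsGone(err) {
                    continue // 410 Gone: resourceVersion compacted away; re-list and retry
                }
                return err
            }

            for ev := range w.ResultChan() {
                if ev.Type == watch.Error {
                    // A compaction mid-watch surfaces as an Error event; the
                    // Reflector logs it as "too old resource version".
                    break
                }
                fmt.Println("event:", ev.Type)
            }
            w.Stop()
            // Channel closed or errored: fall through and re-list.
        }
    }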
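
The fatal line in the kube-controller-manager excerpt is the real event: the controller manager could not renew its leader lease within the renew deadline ("failed to tryAcquireOrRenew context deadline exceeded") and terminated itself via klog's Fatal, which exits with status 255 and so matches the reported exit code; the kubelet then restarts the container and the election runs again. Below is a minimal sketch of that give-up-on-lost-lease pattern against the standard client-go leader-election API, not the actual kube-controller-manager code; the lease name matches the log, but the identity and timings are illustrative defaults rather than the operator's real flags.

    package main

    import (
        "context"
        "os"
        "time"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
        "k8s.io/klog/v2"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            klog.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // The lease named in the log: kube-system/kube-controller-manager.
        lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
            "kube-system", "kube-controller-manager",
            client.CoreV1(), client.CoordinationV1(),
            resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")})
        if err != nil {
            klog.Fatal(err)
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second, // renewals past this fail with "context deadline exceeded"
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    <-ctx.Done() // run controllers until leadership is lost
                },
                OnStoppedLeading: func() {
                    // The path behind "F1107 ... leaderelection lost":
                    // exit rather than keep acting as a deposed leader.
                    klog.Fatal("leaderelection lost")
                },
            },
        })
    }

Most of the other exit codes above follow the usual 128+signal convention: 137 is 128+9 (SIGKILL) and 143 is 128+15 (SIGTERM), consistent with containers being killed while the serial suite disrupted their nodes rather than crashing on their own.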

Full stdout/stderr: junit_e2e_20191107-153255.xml

Passed tests: 57
Skipped tests: 8