Result: FAILURE
Tests: 4 failed / 52 succeeded
Started: 2019-11-07 06:30
Elapsed: 2h40m
Work namespace: ci-op-bz4g7hvr
Pod: 4.3.0-0.nightly-2019-11-07-062654-azure-serial

Test Failures


openshift-tests Monitor cluster while tests execute 1h21m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
21 error level events were detected during this test run:

Nov 07 07:28:53.089 E ns/openshift-monitoring pod/telemeter-client-856d49d8d6-djnnl node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=telemeter-client container exited with code 2 (Error): 
Nov 07 07:28:53.089 E ns/openshift-monitoring pod/telemeter-client-856d49d8d6-djnnl node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=reload container exited with code 2 (Error): 
Nov 07 07:28:53.157 E ns/openshift-ingress pod/router-default-55b895b987-66ntv node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=router container exited with code 2 (Error): ": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:23:21.314Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:23:26.281Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:23:31.289Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:24:18.211Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:24:23.186Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:24:38.964Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:24:47.437Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:25:43.250Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:25:48.224Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:27:01.767Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\nE1107 07:27:13.367619       1 limiter.go:140] error reloading router: wait: no child processes\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Nov 07 07:28:53.189 E ns/openshift-marketplace pod/certified-operators-58df5d9c69-44wwz node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=certified-operators container exited with code 2 (Error): 
Nov 07 07:28:53.208 E ns/openshift-marketplace pod/community-operators-d6477c9cb-zsd8x node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=community-operators container exited with code 2 (Error): 
Nov 07 07:28:54.091 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=config-reloader container exited with code 2 (Error): 2019/11/07 07:21:26 Watching directory: "/etc/alertmanager/config"\n
Nov 07 07:28:54.091 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=alertmanager-proxy container exited with code 2 (Error): 2019/11/07 07:21:27 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 07:21:27 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/07 07:21:27 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/07 07:21:27 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/07 07:21:27 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/07 07:21:27 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 07:21:27 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/07 07:21:27 http.go:96: HTTPS: listening on [::]:9095\n
Nov 07 07:30:07.395 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-07T07:30:04.596Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-07T07:30:04.597Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-07T07:30:04.598Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-07T07:30:04.606Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-07T07:30:04.606Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-07T07:30:04.606Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-07T07:30:04.606Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-07T07:30:04.606Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-07T07:30:04.606Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-07T07:30:04.606Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-07T07:30:04.606Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-07T07:30:04.606Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-07T07:30:04.606Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-07T07:30:04.607Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-07T07:30:04.607Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-07T07:30:04.607Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-07
Nov 07 07:33:15.427 E ns/openshift-ingress pod/router-default-55b895b987-p2rkn node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=router container exited with code 2 (Error): listening on HTTP and HTTPS	{"address": "0.0.0.0:1936"}\n2019-11-07T07:30:02.996Z	INFO	router.template	template/router.go:294	watching for changes	{"path": "/etc/pki/tls/private"}\nE1107 07:30:02.999956       1 haproxy.go:395] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory\n2019-11-07T07:30:03.077Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:30:03.077Z	INFO	router.router	router/router.go:257	router is including routes in all namespaces\n2019-11-07T07:30:03.391Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:30:08.327Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:30:13.321Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:30:19.888Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:30:25.199Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:30:40.135Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:30:56.311Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T07:33:10.688Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 07 07:33:15.427 E ns/openshift-machine-config-operator pod/machine-config-daemon-zdmv8 node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=oauth-proxy container exited with code 143 (Error): 
Nov 07 07:33:15.499 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=config-reloader container exited with code 2 (Error): 2019/11/07 07:30:02 Watching directory: "/etc/alertmanager/config"\n
Nov 07 07:33:15.499 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=alertmanager-proxy container exited with code 2 (Error): 2019/11/07 07:30:03 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 07:30:03 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/07 07:30:03 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/07 07:30:04 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/07 07:30:04 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/07 07:30:04 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 07:30:04 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/07 07:30:04 http.go:96: HTTPS: listening on [::]:9095\n
Nov 07 07:33:15.548 E ns/openshift-marketplace pod/community-operators-d6477c9cb-dzn4n node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=community-operators container exited with code 2 (Error): 
Nov 07 07:33:15.603 E ns/openshift-marketplace pod/redhat-operators-b9dcd8dc4-fq9rh node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=redhat-operators container exited with code 2 (Error): 
Nov 07 07:33:35.576 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-dgmgb container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-07T07:33:32.852Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-07T07:33:32.858Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-07T07:33:32.862Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-07T07:33:32.864Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-07T07:33:32.864Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-07T07:33:32.864Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-07T07:33:32.864Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-07T07:33:32.864Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-07T07:33:32.864Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-07T07:33:32.864Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-07T07:33:32.864Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-07T07:33:32.864Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-07T07:33:32.864Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-07T07:33:32.871Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-07T07:33:32.877Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-07T07:33:32.878Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-07
Nov 07 07:36:53.734 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 07 08:00:10.369 E ns/openshift-image-registry pod/node-ca-vntxs node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-x6z22 container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 08:18:32.454 E ns/openshift-authentication pod/oauth-openshift-dbd877fb9-h7xhq node/ci-op-bz4g7hvr-09f59-xnrz6-master-1 container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 08:29:25.216 E ns/openshift-dns pod/dns-default-clh9m node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-8p7vx container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 08:29:25.216 E ns/openshift-dns pod/dns-default-clh9m node/ci-op-bz4g7hvr-09f59-xnrz6-worker-westus-8p7vx container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 08:39:07.306 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
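The two clusteroperator/dns events above are the operator reporting Degraded=True (NotAllDNSesAvailable) while DNS pods were restarting. As an illustration only, not part of the test run, here is a minimal Go sketch of reading that condition with the openshift/client-go config clientset, assuming a recent client version and an illustrative kubeconfig path:

package main

import (
	"context"
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; point this at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := configclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	co, err := client.ConfigV1().ClusterOperators().Get(context.TODO(), "dns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range co.Status.Conditions {
		if cond.Type == configv1.OperatorDegraded {
			// During this run: Degraded=True, reason NotAllDNSesAvailable.
			fmt.Printf("dns Degraded=%s reason=%s: %s\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}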

				
Full stdout/stderr is available in junit_e2e_20191107-084852.xml



openshift-tests [Feature:Machines][Serial] Managed cluster should grow and decrease when scaling different machineSets simultaneously [Suite:openshift/conformance/serial] 12m5s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:Machines\]\[Serial\]\sManaged\scluster\sshould\sgrow\sand\sdecrease\swhen\sscaling\sdifferent\smachineSets\ssimultaneously\s\[Suite\:openshift\/conformance\/serial\]$'
fail [github.com/openshift/origin/test/extended/machines/scale.go:201]: Timed out after 720.000s.
Expected
    <bool>: false
to be true
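
The assertion at scale.go:201 is a 720-second poll that never converged: the test scaled two MachineSets simultaneously and the polled condition stayed false until the timeout. The exact condition is not shown in this output, but the message shape is what Gomega's Eventually produces for a boolean poll. A minimal sketch, with nodesMatchMachineSetReplicas as a hypothetical stand-in for the real check, not the actual openshift/origin test code:

package machines

import (
	"testing"
	"time"

	"github.com/onsi/gomega"
)

// Hypothetical stand-in for the condition the real test polls after scaling
// the MachineSets; in the failed run it never became true.
func nodesMatchMachineSetReplicas() bool {
	return false
}

func TestScaleSketch(t *testing.T) {
	g := gomega.NewWithT(t)
	// Classic Eventually(poll, timeout, interval) form. When the poll never
	// returns true within 720s, Gomega reports exactly:
	//   Timed out after 720.000s.
	//   Expected
	//       <bool>: false
	//   to be true
	g.Eventually(nodesMatchMachineSetReplicas, 720*time.Second, 10*time.Second).Should(gomega.BeTrue())
}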