Result: FAILURE
Tests: 8 failed / 911 succeeded
Started: 2019-11-06 16:09
Elapsed: 3h16m
Work namespace: ci-op-3lzlspzy
Pod: 4.3.0-0.nightly-2019-11-05-234941-aws-fips

Test Failures


openshift-tests Monitor cluster while tests execute 32m3s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
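The `--ginkgo.focus` value is a regular expression anchored at the end of the test name, with spaces escaped as `\s` so the pattern survives shell quoting. A minimal sketch (Python's `re` as a stand-in for Go's regexp engine, which uses the same syntax here) showing that the focus pattern above selects this test:

```python
import re

# Focus regex exactly as passed to --ginkgo.focus in the command above.
focus = r"openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$"

# The failing test's name, as reported in this run.
test_name = "openshift-tests Monitor cluster while tests execute"

# ginkgo matches the focus regex anywhere in the name; the trailing $
# anchors it to the end, so variants with extra suffixes are excluded.
selected = bool(re.search(focus, test_name))
print(selected)  # True
```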
4 error level events were detected during this test run:

Nov 06 18:51:46.348 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator kube-scheduler is reporting a failure: NodeControllerDegraded: The master node(s) "ip-10-0-136-119.ec2.internal" not ready\n* Cluster operator machine-config is reporting a failure: Failed to resync 4.3.0-0.nightly-2019-11-05-234941 because: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready, retrying. Status: (pool degraded: false total: 3, ready 0, updated: 0, unavailable: 1)
Nov 06 19:00:35.482 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator kube-scheduler is reporting a failure: NodeControllerDegraded: The master node(s) "ip-10-0-136-119.ec2.internal" not ready\n* Cluster operator machine-config is reporting a failure: Failed to resync 4.3.0-0.nightly-2019-11-05-234941 because: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready, retrying. Status: (pool degraded: false total: 3, ready 0, updated: 0, unavailable: 1)
Nov 06 19:04:41.784 E ns/default pod/recycler-for-nfs-4txxv node/ip-10-0-129-190.ec2.internal pod failed (DeadlineExceeded): Pod was active on the node longer than the specified deadline
Nov 06 19:09:39.758 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator kube-scheduler is reporting a failure: NodeControllerDegraded: The master node(s) "ip-10-0-136-119.ec2.internal" not ready\n* Cluster operator machine-config is reporting a failure: Failed to resync 4.3.0-0.nightly-2019-11-05-234941 because: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready, retrying. Status: (pool degraded: false total: 3, ready 0, updated: 0, unavailable: 1)
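All three clusterversion events carry the same machine-config pool summary, "(pool degraded: false total: 3, ready 0, updated: 0, unavailable: 1)": one of the three masters is unavailable and none ever reached ready, which is what blocks the sync. A rough, hypothetical parse of that summary string (field spellings taken from the events above) makes the counts explicit:

```python
import re

# Pool status summary copied from the error events above.
status = "(pool degraded: false total: 3, ready 0, updated: 0, unavailable: 1)"

# Pull out the numeric fields; the colon is optional ("ready 0" has none).
counts = {k: int(v) for k, v in re.findall(r"(\w+):? (\d+)", status)}
print(counts)  # {'total': 3, 'ready': 0, 'updated': 0, 'unavailable': 1}
```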

				
stdout/stderr: junit_e2e_20191106-192208.xml



openshift-tests [Feature:Platform] Managed cluster should have no crashlooping pods in core namespaces over two minutes [Suite:openshift/conformance/parallel] 2m3s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:Platform\]\sManaged\scluster\sshould\shave\sno\scrashlooping\spods\sin\score\snamespaces\sover\stwo\sminutes\s\[Suite\:openshift\/conformance\/parallel\]$'
fail [github.com/openshift/origin/test/extended/operators/cluster.go:122]: Expected
    <[]string | len:4, cap:4>: [
        "Pod openshift-kube-apiserver/revision-pruner-6-ip-10-0-136-119.ec2.internal was pending entire time: unknown error",
        "Pod openshift-kube-controller-manager/revision-pruner-5-ip-10-0-136-119.ec2.internal was pending entire time: unknown error",
        "Pod openshift-kube-scheduler/revision-pruner-5-ip-10-0-136-119.ec2.internal was pending entire time: unknown error",
        "Pod openshift-machine-config-operator/etcd-quorum-guard-6cc8c5f8f4-dqll7 was pending entire time: unknown error",
    ]
to be empty
				



openshift-tests [Feature:Platform][Smoke] Managed cluster should start all core operators [Suite:openshift/conformance/parallel] 1m3s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:Platform\]\[Smoke\]\sManaged\scluster\sshould\sstart\sall\score\soperators\s\[Suite\:openshift\/conformance\/parallel\]$'
fail [github.com/openshift/origin/test/extended/operators/operators.go:160]: Nov  6 18:44:47.049: Some cluster operators never became available /machine-config, /monitoring
				



openshift-tests [Feature:Prometheus][Conformance] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog [Suite:openshift/conformance/parallel/minimal] 7m19s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:Prometheus\]\[Conformance\]\sPrometheus\swhen\sinstalled\son\sthe\scluster\sshouldn\'t\sreport\sany\salerts\sin\sfiring\sstate\sapart\sfrom\sWatchdog\s\[Suite\:openshift\/conformance\/parallel\/minimal\]$'
fail [github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:134]: Expected
    <map[string]error | len:1>: {
        "ALERTS{alertname!~\"Watchdog|ClusterMachineApproverDown|UsingDeprecatedAPIExtensionsV1Beta1\",alertstate=\"firing\"} >= 1": {
            s: "promQL query: ALERTS{alertname!~\"Watchdog|ClusterMachineApproverDown|UsingDeprecatedAPIExtensionsV1Beta1\",alertstate=\"firing\"} >= 1 had reported incorrect results: ALERTS{alertname=\"ClusterMonitoringOperatorReconciliationErrors\", alertstate=\"firing\", endpoint=\"http\", instance=\"10.128.0.40:8080\", job=\"cluster-monitoring-operator\", namespace=\"openshift-monitoring\", pod=\"cluster-monitoring-operator-6d78669c64-g7ftl\", service=\"cluster-monitoring-operator\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"ClusterOperatorDegraded\", alertstate=\"firing\", condition=\"Degraded\", endpoint=\"metrics\", instance=\"10.0.135.237:9099\", job=\"cluster-version-operator\", name=\"dns\", namespace=\"openshift-cluster-version\", pod=\"cluster-version-operator-757b5fc7cc-q42qp\", reason=\"NotAllDNSesAvailable\", service=\"cluster-version-operator\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"ClusterOperatorDegraded\", alertstate=\"firing\", condition=\"Degraded\", endpoint=\"metrics\", instance=\"10.0.135.237:9099\", job=\"cluster-version-operator\", name=\"kube-apiserver\", namespace=\"openshift-cluster-version\", pod=\"cluster-version-operator-757b5fc7cc-q42qp\", reason=\"NodeControllerDegradedMasterNodesReady\", service=\"cluster-version-operator\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"ClusterOperatorDegraded\", alertstate=\"firing\", condition=\"Degraded\", endpoint=\"metrics\", instance=\"10.0.135.237:9099\", job=\"cluster-version-operator\", name=\"kube-controller-manager\", namespace=\"openshift-cluster-version\", pod=\"cluster-version-operator-757b5fc7cc-q42qp\", reason=\"NodeControllerDegradedMasterNodesReady\", service=\"cluster-version-operator\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"ClusterOperatorDegraded\", alertstate=\"firing\", condition=\"Degraded\", endpoint=\"metrics\", instance=\"10.0.135.237:9099\", 
job=\"cluster-version-operator\", name=\"kube-scheduler\", namespace=\"openshift-cluster-version\", pod=\"cluster-version-operator-757b5fc7cc-q42qp\", reason=\"NodeControllerDegradedMasterNodesReady\", service=\"cluster-version-operator\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"ClusterOperatorDegraded\", alertstate=\"firing\", condition=\"Degraded\", endpoint=\"metrics\", instance=\"10.0.135.237:9099\", job=\"cluster-version-operator\", name=\"machine-config\", namespace=\"openshift-cluster-version\", pod=\"cluster-version-operator-757b5fc7cc-q42qp\", reason=\"RequiredPoolsFailed\", service=\"cluster-version-operator\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"ClusterOperatorDegraded\", alertstate=\"firing\", condition=\"Degraded\", endpoint=\"metrics\", instance=\"10.0.135.237:9099\", job=\"cluster-version-operator\", name=\"monitoring\", namespace=\"openshift-cluster-version\", pod=\"cluster-version-operator-757b5fc7cc-q42qp\", reason=\"UpdatingnodeExporterFailed\", service=\"cluster-version-operator\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"ClusterOperatorDown\", alertstate=\"firing\", endpoint=\"metrics\", instance=\"10.0.135.237:9099\", job=\"cluster-version-operator\", name=\"dns\", namespace=\"openshift-cluster-version\", pod=\"cluster-version-operator-757b5fc7cc-q42qp\", service=\"cluster-version-operator\", severity=\"critical\", version=\"4.3.0-0.nightly-2019-11-05-234941\"} => 1 @[1573067449.891]\nALERTS{alertname=\"ClusterOperatorDown\", alertstate=\"firing\", endpoint=\"metrics\", instance=\"10.0.135.237:9099\", job=\"cluster-version-operator\", name=\"kube-apiserver\", namespace=\"openshift-cluster-version\", pod=\"cluster-version-operator-757b5fc7cc-q42qp\", service=\"cluster-version-operator\", severity=\"critical\", version=\"4.3.0-0.nightly-2019-11-05-234941\"} => 1 @[1573067449.891]\nALERTS{alertname=\"ClusterOperatorDown\", alertstate=\"firing\", endpoint=\"metrics\", 
instance=\"10.0.135.237:9099\", job=\"cluster-version-operator\", name=\"kube-controller-manager\", namespace=\"openshift-cluster-version\", pod=\"cluster-version-operator-757b5fc7cc-q42qp\", service=\"cluster-version-operator\", severity=\"critical\", version=\"4.3.0-0.nightly-2019-11-05-234941\"} => 1 @[1573067449.891]\nALERTS{alertname=\"ClusterOperatorDown\", alertstate=\"firing\", endpoint=\"metrics\", instance=\"10.0.135.237:9099\", job=\"cluster-version-operator\", name=\"kube-scheduler\", namespace=\"openshift-cluster-version\", pod=\"cluster-version-operator-757b5fc7cc-q42qp\", service=\"cluster-version-operator\", severity=\"critical\", version=\"4.3.0-0.nightly-2019-11-05-234941\"} => 1 @[1573067449.891]\nALERTS{alertname=\"ClusterOperatorDown\", alertstate=\"firing\", endpoint=\"metrics\", instance=\"10.0.135.237:9099\", job=\"cluster-version-operator\", name=\"machine-config\", namespace=\"openshift-cluster-version\", pod=\"cluster-version-operator-757b5fc7cc-q42qp\", service=\"cluster-version-operator\", severity=\"critical\", version=\"4.3.0-0.nightly-2019-11-05-234941\"} => 1 @[1573067449.891]\nALERTS{alertname=\"ClusterOperatorDown\", alertstate=\"firing\", endpoint=\"metrics\", instance=\"10.0.135.237:9099\", job=\"cluster-version-operator\", name=\"monitoring\", namespace=\"openshift-cluster-version\", pod=\"cluster-version-operator-757b5fc7cc-q42qp\", service=\"cluster-version-operator\", severity=\"critical\", version=\"4.3.0-0.nightly-2019-11-05-234941\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeDaemonSetMisScheduled\", alertstate=\"firing\", daemonset=\"machine-config-daemon\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-machine-config-operator\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeDaemonSetMisScheduled\", alertstate=\"firing\", daemonset=\"machine-config-server\", 
endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-machine-config-operator\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeDaemonSetMisScheduled\", alertstate=\"firing\", daemonset=\"multus-admission-controller\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-multus\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeDaemonSetMisScheduled\", alertstate=\"firing\", daemonset=\"node-ca\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-image-registry\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeDaemonSetMisScheduled\", alertstate=\"firing\", daemonset=\"sdn-controller\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-sdn\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeDaemonSetRolloutStuck\", alertstate=\"firing\", daemonset=\"apiserver\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-apiserver\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeDaemonSetRolloutStuck\", alertstate=\"firing\", daemonset=\"controller-manager\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-controller-manager\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"critical\"} => 1 
@[1573067449.891]\nALERTS{alertname=\"KubeDaemonSetRolloutStuck\", alertstate=\"firing\", daemonset=\"dns-default\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-dns\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeDaemonSetRolloutStuck\", alertstate=\"firing\", daemonset=\"multus\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-multus\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeDaemonSetRolloutStuck\", alertstate=\"firing\", daemonset=\"node-exporter\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-monitoring\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeDaemonSetRolloutStuck\", alertstate=\"firing\", daemonset=\"ovs\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-sdn\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeDaemonSetRolloutStuck\", alertstate=\"firing\", daemonset=\"sdn\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-sdn\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeDaemonSetRolloutStuck\", alertstate=\"firing\", daemonset=\"tuned\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-cluster-node-tuning-operator\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", 
severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeDeploymentReplicasMismatch\", alertstate=\"firing\", deployment=\"etcd-quorum-guard\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-machine-config-operator\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeNodeNotReady\", alertstate=\"firing\", condition=\"Ready\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-monitoring\", node=\"ip-10-0-136-119.ec2.internal\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"warning\", status=\"true\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeNodeNotReady\", alertstate=\"firing\", condition=\"Ready\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", namespace=\"openshift-monitoring\", node=\"ip-10-0-138-52.ec2.internal\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"warning\", status=\"true\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeNodeUnreachable\", alertstate=\"firing\", effect=\"NoSchedule\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", key=\"node.kubernetes.io/unreachable\", namespace=\"openshift-monitoring\", node=\"ip-10-0-136-119.ec2.internal\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubeNodeUnreachable\", alertstate=\"firing\", effect=\"NoSchedule\", endpoint=\"https-main\", instance=\"10.129.2.3:8443\", job=\"kube-state-metrics\", key=\"node.kubernetes.io/unreachable\", namespace=\"openshift-monitoring\", node=\"ip-10-0-138-52.ec2.internal\", pod=\"kube-state-metrics-54874f8db8-k6b7d\", service=\"kube-state-metrics\", severity=\"warning\"} => 1 
@[1573067449.891]\nALERTS{alertname=\"KubePodNotReady\", alertstate=\"firing\", namespace=\"openshift-kube-apiserver\", pod=\"revision-pruner-6-ip-10-0-136-119.ec2.internal\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubePodNotReady\", alertstate=\"firing\", namespace=\"openshift-kube-controller-manager\", pod=\"revision-pruner-5-ip-10-0-136-119.ec2.internal\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubePodNotReady\", alertstate=\"firing\", namespace=\"openshift-kube-scheduler\", pod=\"revision-pruner-5-ip-10-0-136-119.ec2.internal\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"KubePodNotReady\", alertstate=\"firing\", namespace=\"openshift-machine-config-operator\", pod=\"etcd-quorum-guard-6cc8c5f8f4-dqll7\", severity=\"critical\"} => 1 @[1573067449.891]\nALERTS{alertname=\"TargetDown\", alertstate=\"firing\", job=\"api\", namespace=\"openshift-apiserver\", service=\"api\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"TargetDown\", alertstate=\"firing\", job=\"controller-manager\", namespace=\"openshift-controller-manager\", service=\"controller-manager\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"TargetDown\", alertstate=\"firing\", job=\"crio\", namespace=\"kube-system\", service=\"kubelet\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"TargetDown\", alertstate=\"firing\", job=\"dns-default\", namespace=\"openshift-dns\", service=\"dns-default\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"TargetDown\", alertstate=\"firing\", job=\"etcd\", namespace=\"openshift-etcd\", service=\"etcd\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"TargetDown\", alertstate=\"firing\", job=\"kube-controller-manager\", namespace=\"openshift-kube-controller-manager\", service=\"kube-controller-manager\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"TargetDown\", alertstate=\"firing\", 
job=\"kubelet\", namespace=\"kube-system\", service=\"kubelet\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"TargetDown\", alertstate=\"firing\", job=\"machine-config-daemon\", namespace=\"openshift-machine-config-operator\", service=\"machine-config-daemon\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"TargetDown\", alertstate=\"firing\", job=\"multus-admission-controller-monitor-service\", namespace=\"openshift-multus\", service=\"multus-admission-controller-monitor-service\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"TargetDown\", alertstate=\"firing\", job=\"node-exporter\", namespace=\"openshift-monitoring\", service=\"node-exporter\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"TargetDown\", alertstate=\"firing\", job=\"scheduler\", namespace=\"openshift-kube-scheduler\", service=\"scheduler\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"TargetDown\", alertstate=\"firing\", job=\"sdn\", namespace=\"openshift-sdn\", service=\"sdn\", severity=\"warning\"} => 1 @[1573067449.891]\nALERTS{alertname=\"etcdMembersDown\", alertstate=\"firing\", job=\"etcd\", severity=\"critical\"} => 1 @[1573067449.891]",
        },
    }
to be empty
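The failing promQL query excludes only three alert names via `alertname!~"Watchdog|ClusterMachineApproverDown|UsingDeprecatedAPIExtensionsV1Beta1"`; every other firing alert fails the test. PromQL's `!~` matcher is fully anchored, so the pattern must match the entire label value for an alert to be excluded. A minimal sketch of that filter, using a hypothetical alert list drawn from the output above:

```python
import re

# PromQL anchors regex label matchers, so wrap the alternation in ^(?:...)$ to
# reproduce the fully-anchored behaviour of alertname!~"...".
exclude = re.compile(
    r"^(?:Watchdog|ClusterMachineApproverDown|UsingDeprecatedAPIExtensionsV1Beta1)$"
)

# Sample of alert names firing in this run (see the query results above).
firing = [
    "Watchdog",                 # always-firing canary; expected
    "ClusterOperatorDegraded",
    "KubeNodeNotReady",
    "etcdMembersDown",
]

# Anything not matched by the exclusion regex causes the test to fail.
unexpected = [a for a in firing if not exclude.match(a)]
print(unexpected)  # ['ClusterOperatorDegraded', 'KubeNodeNotReady', 'etcdMembersDown']
```

Because the match is anchored, an alert named e.g. `WatchdogProxy` would not be excluded, which is why the test tolerates only the exact canary alert.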