Timeout error when downloading images
Describe the bug
The installation times out while scheduling/downloading images. The logs indicate a timeout waiting for the postgres persistent volume claim to be bound.
Che version
- latest
Steps to reproduce
Ran `chectl server:start --platform minikube --multiuser`
Expected behavior
Eclipse Che is deployed and a URL is generated.
Runtime
- kubernetes (include output of `kubectl version`)
- Openshift (include output of `oc version`)
- minikube (include output of `minikube version` and `kubectl version`)
- minishift (include output of `minishift version` and `oc version`)
- docker-desktop + K8S (include output of `docker version` and `kubectl version`)
- other: (please specify)
```
minikube version: v1.9.1
commit: d8747aec7ebf8332ddae276d5f8fb42d3152b5a1

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
```
Screenshots
```
✔ Verify Kubernetes API…OK
✔ 👀 Looking for an already existing Eclipse Che instance
✔ Verify if Eclipse Che is deployed into namespace "che"…it is not
✔ ✈️ Minikube preflight checklist
  ✔ Verify if kubectl is installed
  ✔ Verify if minikube is installed
  ✔ Verify if minikube is running
  ↓ Start minikube [skipped]
    → Minikube is already running.
  ✔ Check Kubernetes version: Found v1.18.0.
  ✔ Verify if minikube ingress addon is enabled
  ✔ Enable minikube ingress addon
  ✔ Retrieving minikube IP and domain for ingress URLs…172.17.0.2.nip.io.
Eclipse Che logs will be available in '/tmp/chectl-logs/1586030573786'
✔ Start following logs
  ✔ Start following Operator logs…done
  ✔ Start following Eclipse Che logs…done
  ✔ Start following Postgres logs…done
  ✔ Start following Keycloak logs…done
  ✔ Start following Plugin registry logs…done
  ✔ Start following Devfile registry logs…done
✔ Start following events
  ✔ Start following namespace events…done
✔ 🏃 Running the Eclipse Che operator
  ✔ Copying operator resources…done.
  ✔ Create Namespace (che)…It already exists.
  ✔ Create ServiceAccount che-operator in namespace che…It already exists.
  ✔ Create Role che-operator in namespace che…It already exists.
  ✔ Create ClusterRole che-operator…It already exists.
  ✔ Create RoleBinding che-operator in namespace che…It already exists.
  ✔ Create ClusterRoleBinding che-operator…It already exists.
  ✔ Create CRD checlusters.org.eclipse.che…It already exists.
  ✔ Waiting 5 seconds for the new Kubernetes resources to get flushed…done.
  ✔ Create deployment che-operator in namespace che…It already exists.
  ✔ Create Eclipse Che cluster eclipse-che in namespace che…It already exists.
❯ ✅ Post installation checklist
  ❯ Eclipse Che pod bootstrap
    ✖ scheduling
      → ERR_TIMEOUT: Timeout set to pod wait timeout 300000. podExist: false, currentPhase: undefined
    downloading images
    starting
  Retrieving Eclipse Che server URL
Eclipse Che status check
› Error: Error: ERR_TIMEOUT: Timeout set to pod wait timeout 300000. podExist: false, currentPhase: undefined
› Installation failed, check logs in '/tmp/chectl-logs/1586030573786'
```
Installation method
- chectl: `chectl server:start --platform minikube --multiuser`
Environment
- [x] my computer
  - Windows
  - Linux
  - macOS
- Cloud
  - Amazon
  - Azure
  - GCE
  - other (please specify)
- other: please specify
Eclipse Che Logs
```
time="2020-04-04T20:02:45Z" level=info msg="Default 'info' log level is applied"
time="2020-04-04T20:02:45Z" level=info msg="Go Version: go1.12.12"
time="2020-04-04T20:02:45Z" level=info msg="Go OS/Arch: linux/amd64"
time="2020-04-04T20:02:45Z" level=info msg="operator-sdk Version: v0.5.0"
time="2020-04-04T20:02:45Z" level=info msg="Operator is running on Kubernetes"
time="2020-04-04T20:02:45Z" level=info msg="Registering Che Components Types"
time="2020-04-04T20:02:45Z" level=info msg="Starting the Cmd"
time="2020-04-04T20:02:45Z" level=info msg="Waiting for PVC postgres-data to be bound. Default timeout: 10 seconds"
time="2020-04-04T20:02:55Z" level=warning msg="Timeout waiting for a PVC postgres-data to be bound. Current phase is Pending"
time="2020-04-04T20:02:55Z" level=warning msg="Sometimes PVC can be bound only when the first consumer is created"
time="2020-04-04T20:02:56Z" level=info msg="Waiting for deployment postgres. Default timeout: 420 seconds"
```
LAST SEEN | TYPE | REASON | OBJECT | MESSAGE |
---|---|---|---|---|
22m | Normal | Scheduled | pod/che-operator-7b9fd956cb-fwbt8 | Successfully assigned che/che-operator-7b9fd956cb-fwbt8 to minikube |
22m | Normal | Pulling | pod/che-operator-7b9fd956cb-fwbt8 | Pulling image "quay.io/eclipse/che-operator:7.10.0" |
21m | Normal | Pulled | pod/che-operator-7b9fd956cb-fwbt8 | Successfully pulled image "quay.io/eclipse/che-operator:7.10.0" |
21m | Normal | Created | pod/che-operator-7b9fd956cb-fwbt8 | Created container che-operator |
21m | Normal | Started | pod/che-operator-7b9fd956cb-fwbt8 | Started container che-operator |
18s | Normal | SandboxChanged | pod/che-operator-7b9fd956cb-fwbt8 | Pod sandbox changed, it will be killed and re-created. |
16s | Normal | Pulling | pod/che-operator-7b9fd956cb-fwbt8 | Pulling image "quay.io/eclipse/che-operator:7.10.0" |
13s | Normal | Pulled | pod/che-operator-7b9fd956cb-fwbt8 | Successfully pulled image "quay.io/eclipse/che-operator:7.10.0" |
13s | Normal | Created | pod/che-operator-7b9fd956cb-fwbt8 | Created container che-operator |
13s | Normal | Started | pod/che-operator-7b9fd956cb-fwbt8 | Started container che-operator |
22m | Normal | SuccessfulCreate | replicaset/che-operator-7b9fd956cb | Created pod: che-operator-7b9fd956cb-fwbt8 |
22m | Normal | ScalingReplicaSet | deployment/che-operator | Scaled up replica set che-operator-7b9fd956cb to 1 |
6m46s | Warning | FailedScheduling | pod/postgres-6448d66f7f-2hn8w | running "VolumeBinding" filter plugin for pod "postgres-6448d66f7f-2hn8w"$ |
9s | Warning | FailedScheduling | pod/postgres-6448d66f7f-2hn8w | running "VolumeBinding" filter plugin for pod "postgres-6448d66f7f-2hn8w"$ |
21m | Normal | SuccessfulCreate | replicaset/postgres-6448d66f7f | Created pod: postgres-6448d66f7f-2hn8w |
6m30s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
21m | Normal | ScalingReplicaSet | deployment/postgres | Scaled up replica set postgres-6448d66f7f to 1 |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Warning | FailedScheduling | pod/postgres-6448d66f7f-2hn8w | running "VolumeBinding" filter plugin for pod "postgres-6448d66f7f-2hn8w"$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Warning | FailedScheduling | pod/postgres-6448d66f7f-2hn8w | running "VolumeBinding" filter plugin for pod "postgres-6448d66f7f-2hn8w"$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner "k8s.i$ |
0s | Normal | ExternalProvisioning | persistentvolumeclaim/postgres-data | waiting for a volume to be created, either by external provisioner |
Top GitHub Comments
Storage provisioning error, yes. The workaround is to set the storage class name in the CheCluster custom resource:
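A minimal sketch of such a CR; the `spec` field names are assumed from the org.eclipse.che/v1 CheCluster CRD of this era, and the class name `local-storage`, the file name, and the nip.io domain are placeholders:

```sh
# Sketch of a custom CheCluster CR pinning both PVCs to a manually
# provisioned storage class (names here are illustrative).
cat > custom-che-cluster.yaml <<'EOF'
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: eclipse-che
spec:
  server:
    tlsSupport: false                       # see SIDE NOTE below
  k8s:
    ingressDomain: '192.168.99.100.nip.io'  # see SIDE NOTE 2 below: your minikube LAN IP
  storage:
    pvcStrategy: 'common'
    pvcClaimSize: '1Gi'
    postgresPVCStorageClassName: 'local-storage'
    workspacePVCStorageClassName: 'local-storage'
EOF
```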
minikube creates a VM for setting up the cluster, so /data and /data/wksp have to be created and chmod 777'd inside the VM for this to work. The same goes for whatever paths you choose if you modify these values.
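A sketch of that step, assuming `minikube ssh` gives you a shell on the node VM:

```sh
# Create the hostPath directories inside the minikube VM and open them up.
minikube ssh 'sudo mkdir -p /data /data/wksp && sudo chmod 777 /data /data/wksp'
```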
SIDE NOTE: this may also require disabling the default TLS option in the YAML: `tlsSupport: false`
SIDE NOTE 2: the domain should also be forced in the YAML: `ingressDomain: '<minikube-lan-ip>.nip.io'`
Create the storage class and volumes accordingly:
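For example, a no-provisioner StorageClass plus two hostPath PVs backing the directories created above; the resource names and sizes are illustrative, not from the original comment:

```sh
# Illustrative manifests: a manual StorageClass and two hostPath PVs.
cat > che-volumes.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: che-postgres-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: che-workspace-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/wksp
EOF
kubectl apply -f che-volumes.yaml
```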
After this, use the additional argument to chectl server:start:
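Presumably something like the following, assuming chectl's `--che-operator-cr-yaml` flag from that era, which points server:start at a custom CheCluster CR:

```sh
# Start Che with the custom CR written earlier (flag name as I recall it
# from chectl of this era).
chectl server:start --platform minikube --multiuser \
  --che-operator-cr-yaml=custom-che-cluster.yaml
```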
Upon further attempts to start Che (using chectl server:delete and server:start again), the postgres data folder (called userdata) has to be removed, and the volumes in the minikube cluster have to be removed and created again (using kubectl delete -f and apply -f with the provided YAML).
So, to recap: to remove the garbage files and volumes left by an unsuccessful Che start:
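A sketch of that cleanup, reusing the hypothetical file names from the examples above:

```sh
# Delete the failed install, the leftover postgres data directory
# ("userdata"), and the manually created volumes.
chectl server:delete
minikube ssh 'sudo rm -rf /data/userdata'
kubectl delete -f che-volumes.yaml
```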
To try again:
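```sh
# Recreate the volumes, then start over with the custom CR.
kubectl apply -f che-volumes.yaml
chectl server:start --platform minikube --multiuser \
  --che-operator-cr-yaml=custom-che-cluster.yaml
```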
@cbyreddy I am closing this one. Feel free to open a new issue.