Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging 3rd party libraries. It collects links to all the places you might be looking at while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Time out error when downloading images

See original GitHub issue

Describe the bug

Times out when scheduling/downloading images. Logs indicate a timeout waiting for the postgres volume to be bound.
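
A quick way to confirm the PVC is the blocker is to check it directly. This is a minimal sketch, assuming the "che" namespace and the postgres-data PVC name that show up later in the logs:

# Check whether the postgres PVC is stuck in Pending and which StorageClass it requests
kubectl get pvc -n che
kubectl describe pvc postgres-data -n che
# List the storage classes the cluster actually offers
kubectl get storageclass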

Che version

  • latest

Steps to reproduce

Ran chectl server:start --platform minikube --multiuser

Expected behavior

Eclipse Che is deployed and a URL is generated.

Runtime

  • kubernetes (include output of kubectl version)
  • Openshift (include output of oc version)
  • minikube (include output of minikube version and kubectl version)
  • minishift (include output of minishift version and oc version)
  • docker-desktop + K8S (include output of docker version and kubectl version)
  • other: (please specify)

minikube version: v1.9.1
commit: d8747aec7ebf8332ddae276d5f8fb42d3152b5a1

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

Screenshots

✔ Verify Kubernetes API…OK
✔ 👀 Looking for an already existing Eclipse Che instance
✔ Verify if Eclipse Che is deployed into namespace "che"…it is not
✔ ✈️ Minikube preflight checklist
✔ Verify if kubectl is installed
✔ Verify if minikube is installed
✔ Verify if minikube is running
↓ Start minikube [skipped]
→ Minikube is already running.
✔ Check Kubernetes version: Found v1.18.0.
✔ Verify if minikube ingress addon is enabled
✔ Enable minikube ingress addon
✔ Retrieving minikube IP and domain for ingress URLs…172.17.0.2.nip.io.
Eclipse Che logs will be available in '/tmp/chectl-logs/1586030573786'
✔ Start following logs
✔ Start following Operator logs…done
✔ Start following Eclipse Che logs…done
✔ Start following Postgres logs…done
✔ Start following Keycloak logs…done
✔ Start following Plugin registry logs…done
✔ Start following Devfile registry logs…done
✔ Start following events
✔ Start following namespace events…done
✔ 🏃‍ Running the Eclipse Che operator
✔ Copying operator resources…done.
✔ Create Namespace (che)…It already exists.
✔ Create ServiceAccount che-operator in namespace che…It already exists.
✔ Create Role che-operator in namespace che…It already exists.
✔ Create ClusterRole che-operator…It already exists.
✔ Create RoleBinding che-operator in namespace che…It already exists.
✔ Create ClusterRoleBinding che-operator…It already exists.
✔ Create CRD checlusters.org.eclipse.che…It already exists.
✔ Waiting 5 seconds for the new Kubernetes resources to get flushed…done.
✔ Create deployment che-operator in namespace che…It already exists.
✔ Create Eclipse Che cluster eclipse-che in namespace che…It already exists.
❯ ✅ Post installation checklist
❯ Eclipse Che pod bootstrap
✖ scheduling
→ ERR_TIMEOUT: Timeout set to pod wait timeout 300000. podExist: false, currentPhase: undefined
downloading images
starting
Retrieving Eclipse Che server URL
Eclipse Che status check
› Error: Error: ERR_TIMEOUT: Timeout set to pod wait timeout 300000. podExist: false, currentPhase: undefined
› Installation failed, check logs in '/tmp/chectl-logs/1586030573786'

Installation method

  • chectl: chectl server:start --platform minikube --multiuser

Environment

  • [x] my computer
    • Windows
    • Linux
    • macOS
  • Cloud
    • Amazon
    • Azure
    • GCE
    • other (please specify)
  • other: please specify

Eclipse Che Logs

time="2020-04-04T20:02:45Z" level=info msg="Default 'info' log level is applied"
time="2020-04-04T20:02:45Z" level=info msg="Go Version: go1.12.12"
time="2020-04-04T20:02:45Z" level=info msg="Go OS/Arch: linux/amd64"
time="2020-04-04T20:02:45Z" level=info msg="operator-sdk Version: v0.5.0"
time="2020-04-04T20:02:45Z" level=info msg="Operator is running on Kubernetes"
time="2020-04-04T20:02:45Z" level=info msg="Registering Che Components Types"
time="2020-04-04T20:02:45Z" level=info msg="Starting the Cmd"
time="2020-04-04T20:02:45Z" level=info msg="Waiting for PVC postgres-data to be bound. Default timeout: 10 seconds"
time="2020-04-04T20:02:55Z" level=warning msg="Timeout waiting for a PVC postgres-data to be bound. Current phase is Pending"
time="2020-04-04T20:02:55Z" level=warning msg="Sometimes PVC can be bound only when the first consumer is created"
time="2020-04-04T20:02:56Z" level=info msg="Waiting for deployment postgres. Default timeout: 420 seconds"
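
The warning "Sometimes PVC can be bound only when the first consumer is created" points at the StorageClass's volume binding mode. A quick check, as a sketch; "standard" is assumed to be minikube's default class name:

# Immediate vs. WaitForFirstConsumer determines whether the PVC can bind before a pod consumes it
# ("standard" is an assumption for the default minikube StorageClass)
kubectl get storageclass standard -o jsonpath='{.volumeBindingMode}{"\n"}'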

LAST SEEN TYPE REASON OBJECT MESSAGE
22m Normal Scheduled pod/che-operator-7b9fd956cb-fwbt8 Successfully assigned che/che-operator-7b9fd956cb-fwbt8 to minikube
22m Normal Pulling pod/che-operator-7b9fd956cb-fwbt8 Pulling image "quay.io/eclipse/che-operator:7.10.0"
21m Normal Pulled pod/che-operator-7b9fd956cb-fwbt8 Successfully pulled image "quay.io/eclipse/che-operator:7.10.0"
21m Normal Created pod/che-operator-7b9fd956cb-fwbt8 Created container che-operator
21m Normal Started pod/che-operator-7b9fd956cb-fwbt8 Started container che-operator
18s Normal SandboxChanged pod/che-operator-7b9fd956cb-fwbt8 Pod sandbox changed, it will be killed and re-created.
16s Normal Pulling pod/che-operator-7b9fd956cb-fwbt8 Pulling image "quay.io/eclipse/che-operator:7.10.0"
13s Normal Pulled pod/che-operator-7b9fd956cb-fwbt8 Successfully pulled image "quay.io/eclipse/che-operator:7.10.0"
13s Normal Created pod/che-operator-7b9fd956cb-fwbt8 Created container che-operator
13s Normal Started pod/che-operator-7b9fd956cb-fwbt8 Started container che-operator
22m Normal SuccessfulCreate replicaset/che-operator-7b9fd956cb Created pod: che-operator-7b9fd956cb-fwbt8
22m Normal ScalingReplicaSet deployment/che-operator Scaled up replica set che-operator-7b9fd956cb to 1
6m46s Warning FailedScheduling pod/postgres-6448d66f7f-2hn8w running "VolumeBinding" filter plugin for pod "postgres-6448d66f7f-2hn8w"$
9s Warning FailedScheduling pod/postgres-6448d66f7f-2hn8w running "VolumeBinding" filter plugin for pod "postgres-6448d66f7f-2hn8w"$
21m Normal SuccessfulCreate replicaset/postgres-6448d66f7f Created pod: postgres-6448d66f7f-2hn8w
6m30s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
21m Normal ScalingReplicaSet deployment/postgres Scaled up replica set postgres-6448d66f7f to 1
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Warning FailedScheduling pod/postgres-6448d66f7f-2hn8w running "VolumeBinding" filter plugin for pod "postgres-6448d66f7f-2hn8w"$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Warning FailedScheduling pod/postgres-6448d66f7f-2hn8w running "VolumeBinding" filter plugin for pod "postgres-6448d66f7f-2hn8w"$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner "k8s.i$
0s Normal ExternalProvisioning persistentvolumeclaim/postgres-data waiting for a volume to be created, either by external provisioner
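
The ExternalProvisioning events show the PVC waiting on an external provisioner, which on minikube is normally the storage-provisioner addon. The checks below are a sketch, assuming a default minikube installation:

# Verify the hostpath provisioner addon is enabled and its pod is healthy
minikube addons list | grep storage-provisioner
kubectl -n kube-system get pod storage-provisioner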

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 13 (9 by maintainers)

Top GitHub Comments

2 reactions
gattytto commented, Jun 15, 2020

Storage provisioning error, yes. The workaround is to set storageClassName in the CRD:

minikube creates a VM for the cluster, so /data and /data/wksp have to be created and chmod 777'd inside the VM for this to work. The same goes for whatever paths you choose if you modify these values.
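
For example, the directories can be prepared from the host via minikube ssh (a sketch; the paths mirror the hostPath values used in the manifests below):

# Create the hostPath directories inside the minikube VM and open up their permissions
minikube ssh 'sudo mkdir -p /data/wksp && sudo chmod 777 /data /data/wksp'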

SIDE NOTE: this may also require disabling the default TLS option in the YAML: tlsSupport: false

SIDE NOTE 2: the ingress domain should also be forced in the YAML: ingressDomain: 'minikube-lan-ip.nip.io'

#file: /usr/local/lib/chectl/templates/che-operator/crds/org_v1_che_cr.yaml
postgresPVCStorageClassName: eclipseche
workspacePVCStorageClassName: eclipsechewksp
ingressDomain: 'minikube-lan-ip.nip.io' #CHANGE TO A REAL minikube-lan-ip
tlsSupport: false 

Create the storage classes and volumes accordingly:

#file: storageclass_and_volumes.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: eclipsechewksp
  labels:
    type: local
spec:
  storageClassName: eclipsechewksp
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/wksp"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: eclipseche
  labels:
    type: local
spec:
  storageClassName: eclipseche
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/"
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: eclipsechewksp
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Retain
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: eclipseche
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Retain
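
Applying the manifest and sanity-checking the result before starting Che again is implied but not shown; a minimal sketch (Retain is used above so a deleted claim does not wipe the hostPath data):

kubectl apply -f storageclass_and_volumes.yaml
# Both PVs should show as Available and both storage classes should be listed
kubectl get pv,storageclass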

After this, use the additional argument in chectl server:start:

chectl server:start --platform minikube --multiuser --che-operator-cr-yaml=/usr/local/lib/chectl/templates/che-operator/crds/org_v1_che_cr.yaml

Upon subsequent attempts to start chectl (using chectl server:delete and server:start again), the postgres folder (called userdata) has to be removed, and the volumes in the minikube cluster have to be removed and created again (using kubectl delete -f and apply -f with the provided YAML).

So to recap: to remove the garbage files and volumes left behind by an unsuccessful Che start:

chectl server:delete 
kubectl delete -f <storageclass_and_volumes.yaml> 
rm -rf /data/userdata
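
Note that /data/userdata lives on the hostPath inside the minikube VM, so the rm likely has to run in there rather than on the host (a sketch):

minikube ssh 'sudo rm -rf /data/userdata'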

To try again:

kubectl apply -f <storageclass_and_volumes.yaml> 
chectl server:start --platform minikube --multiuser --che-operator-cr-yaml=/usr/local/lib/chectl/templates/che-operator/crds/org_v1_che_cr.yaml
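
After the restart, the claims should bind against the new hostPath volumes; a quick verification, assuming the "che" namespace:

# postgres-data (and the workspace claims, once created) should report STATUS Bound
kubectl get pvc -n che
kubectl get pv
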
0 reactions
tolusha commented, Apr 29, 2020

@cbyreddy I am closing this one. Feel free to open a new issue.

Read more comments on GitHub >

Top Results From Across the Web

The Wait Operation Timed Out on Windows 10/11
Solution 1: Restart Windows 10/11 ... It is said that before you go deeper into this photo error the wait operation timed out,...
Read more >
Error: Oops! A timeout occurred while downloading the media ...
Facebook's servers are sending us back a `-2` error, which means that it's taking too long to download the media attached to the...
Read more >
Fix: 'The Wait Operation Timed Out' Error in Photos App ...
The Wait Operation Timed Out error means code break due to that unhandled exception occurring when the request is executed. In fact, the...
Read more >
How to Fix an Internet Download that "Timed Out" - Azcentral
This problem can't be solved on your end; the only solution to this problem is to redownload the file once it becomes available...
Read more >
The wait operation timed out while opening Pictures or Videos
The wait operation timed out error in Photos app · 1] Run Troubleshooters · 2] Reinstall Photos app or Movies & TV app...
Read more >
