
617 pods created by che and increasing


Describe the bug

617 pods created by che and increasing

Che version

  • latest
  • nightly
  • other: 7.1.0

Steps to reproduce

I have no idea how it happened; all I have are about 3 workspaces, and they are running just fine. All I remember is that at some point I was trying the dev version and reported a bug about it: https://github.com/eclipse/che/issues/14590

I tried creating multiple test workspaces, but I would eventually delete them.

I went to the office and got back home to find that Che's pod was unstable and there were 549 pods, which one hour later became 617.

I have a k8s cluster deployed with a worker node that has 32 GB of memory, 6 CPUs, and 1 TB of NFS storage, which is impossible for Che alone to consume 100% of! And I have almost nothing but Che on that cluster; see the screenshots below for more insight.

I do not upload videos; the maximum storage I need for my dev environment is at best 3 GB, and I am using 2.3 GB at the moment. The entire cluster has almost nothing but Che (the screenshots will show this).

I did check https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/ but please keep in mind the best-practices section: https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#best-practices

Operator wants to evict Pods at 95% memory utilization to reduce incidence of system OOM.
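
For context, that best-practices scenario maps onto kubelet eviction settings roughly like the sketch below. This is illustrative only; the node size and the 500Mi / 1.5Gi values come from the Kubernetes docs example, not from this cluster:

# Example kubelet flags for the "evict at ~95% memory utilization" scenario on a ~10Gi node
# (values are illustrative; adjust to the actual node size and how your kubelet is configured)
kubelet \
  --eviction-hard=memory.available<500Mi \
  --system-reserved=memory=1.5Gi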

Expected behavior

Runtime

  • kubernetes (include output of kubectl version)
  • Openshift (include output of oc version)
  • minikube (include output of minikube version and kubectl version)
  • minishift (include output of minishift version and oc version)
  • docker-desktop + K8S (include output of docker version and kubectl version)
  • other: (please specify)
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-21T15:34:43Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-21T15:23:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Screenshots

The following screenshots are attached to the original issue:

  • Che unstable
  • Nodes allocated resources
  • Consumed storage out of 1 TB
  • Running workspaces, working just fine (I believe those are the 3 that are all I have)
  • The rest of the 617 pods
  • Pods

Installation method

  • chectl
  • che-operator
  • minishift-addon
  • I don’t know
chectl server:start --installer=helm --domain=MyDomain.com --multiuser --platform=k8s --tls

Environment

Additional context

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 11 (4 by maintainers)

Top GitHub Comments

1 reaction
SDAdham commented, Sep 24, 2019

Based on the investigation @sleshchenko and I did:

Long story short, the pods were hitting their own storage limit: the workspaces created by Che were always limited to 1Gi of storage, while I was trying to clone a project that is > 1Gi.
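
Pods that exceed their storage limit get evicted, but the evicted pod objects are left behind, which would explain the ever-growing pod count. A minimal sketch of how one might confirm and clean this up with plain kubectl (assuming the workspace pods run in the che namespace; adjust the namespace to your install):

# List the leftover pods that have already failed / been evicted
kubectl get pods -n che --field-selector=status.phase=Failed

# Check the eviction reason on one of them (e.g. ephemeral storage / disk pressure)
kubectl describe pod <pod-name> -n che

# Delete all the leftover failed pods in one go
kubectl delete pods -n che --field-selector=status.phase=Failed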

@sleshchenko was able to pin down the places in the code where the 1Gi limit is assigned.

Workaround:

Setting CHE_WORKSPACE_PROJECTS_STORAGE_DEFAULT_SIZE and CHE_INFRA_KUBERNETES_PVC_QUANTITY to 10Gi in Che's ConfigMap and then restarting the Che server allowed Che to create future workspaces with a sufficient storage size.
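
A minimal sketch of that workaround using plain kubectl, assuming a Helm-based install where both the ConfigMap and the server deployment are named che in the che namespace (names may differ in your installation):

# Raise the default project storage and PVC size in Che's ConfigMap
kubectl patch configmap che -n che --type merge -p \
  '{"data":{"CHE_WORKSPACE_PROJECTS_STORAGE_DEFAULT_SIZE":"10Gi","CHE_INFRA_KUBERNETES_PVC_QUANTITY":"10Gi"}}'

# Restart the Che server so it picks up the new values (kubectl 1.15+)
kubectl rollout restart deployment/che -n che

After the restart, workspaces created from then on get the larger storage size; existing workspaces keep the PVCs they were created with.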

Suggestion:

When creating a new workspace, just as we get to set the memory, maybe we should also be able to set the storage size?

0 reactions
che-bot commented, Mar 25, 2020

Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.

Mark the issue as fresh with /remove-lifecycle stale in a new comment.

If this issue is safe to close now please do so.

Moderators: Add lifecycle/frozen label to avoid stale mode.


