Default k8s resource requests/limits
Use Case
When installing Dagster - via the Dagster Helm chart - on a resource-constrained (or autoscaling) K8s cluster, having resource requests/limits set for all K8s containers is useful to: (a) prevent accidental overcommit / OOM errors and (b) trigger (GKE) autoscaling rules.
Ideas of Implementation
- A default install (via the Helm chart, with no additional user config) should configure all K8s containers to have sane minimum resource requests/limits, e.g.:

```yaml
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 200m
    memory: 256Mi
```

  This should include all containers created by the initial install and those created via Pipeline Runs.
- It should be possible to override these defaults on a case-by-case basis via any of the existing mechanisms (e.g. extra Helm values config, or `tags = {'dagster-k8s/config': {}}` on any of the `@solid`s); see the sketch below.
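  For illustration, a per-solid override via the `dagster-k8s/config` tag might look like the sketch below. The exact schema accepted under that tag (e.g. the `container_config` key and its fields) is an assumption here and should be checked against the dagster-k8s docs for the installed version; Helm-side overrides would likewise follow whatever values schema the chart exposes.

```python
from dagster import solid


@solid(
    tags={
        "dagster-k8s/config": {
            # Assumed shape: a `container_config` key carrying Kubernetes
            # Container fields -- verify against the dagster-k8s version in use.
            "container_config": {
                "resources": {
                    "requests": {"cpu": "200m", "memory": "256Mi"},
                    "limits": {"cpu": "500m", "memory": "512Mi"},
                }
            }
        }
    }
)
def resource_hungry_solid(context):
    context.log.info("Runs in a container with the resources configured above")
```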
Additional Info
- This handy bit of `kubectl` shows all the configured resource requests/limits for all pods in the `dagster` namespace:

```shell
kubectl --namespace dagster get pods \
  -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image,STATUS:.status.phase,\
'CPU(request)':{.spec.containers[*].resources.requests.cpu},'CPU(limit)':{.spec.containers[*].resources.limits.cpu},\
'MEMORY(request)':{.spec.containers[*].resources.requests.memory},'MEMORY(limit)':{.spec.containers[*].resources.limits.memory}
```
- It may be possible to configure defaults by adding a LimitRange object to the namespace - I’d be happy to submit a PR with a “works on my GKE cluster” implementation if this seems like a sane approach
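  A minimal sketch of such a LimitRange, mirroring the defaults proposed above (untested against the chart; the object name is just illustrative):

```yaml
# Sketch only: namespace-wide container defaults via a LimitRange.
apiVersion: v1
kind: LimitRange
metadata:
  name: dagster-container-defaults
  namespace: dagster
spec:
  limits:
    - type: Container
      default:            # used as the limit when a container specifies none
        cpu: 200m
        memory: 256Mi
      defaultRequest:     # used as the request when a container specifies none
        cpu: 200m
        memory: 256Mi
```

  With this in place, containers created without explicit resources (including those from Pipeline Runs) would pick up these defaults from the API server.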
Message from the maintainers:
Excited about this feature? Give it a 👍. We factor engagement into prioritization.
@chenbobby Given the changes above, I think we should close this issue. WDYT?
@chenbobby Thanks for the thoughts - I'll take a stab at a PR, but might only get to it this weekend.