Support for nprocs?
Is there any interest in building in support for nprocs? I know in #84 the consensus was that having a 1-1 relationship between processes and pods makes the most sense.
We use nprocs because
- Some workloads work better with processes than with threads.
- We prefer to think in terms of machines rather than pods.
I’ve considered thinking in pods rather than machines, but for the clusters we manage, machines are the fundamental unit people pay for, and it’s easy to end up with machines that are under-utilized at the k8s level. Yes, k8s can move pods around, but that can disrupt longer-running workloads.
For the most part, using dask-kubernetes with nprocs>1 has worked pretty well. It can get a little goofy, because if nprocs=4 and I call scale(4), I end up with 16 workers. I think the most value would come from making adaptive understand nprocs.

So the question is just whether anyone else cares about this. If it’s just me, I’ll subclass Adaptive and call it a day. Otherwise I can add this functionality into dask-kubernetes.
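For context, this is roughly the setup I’m describing; a minimal, untested sketch assuming the classic KubeCluster API, with a worker pod template whose dask-worker command passes --nprocs 4 (the image name and resource values are placeholders):

```python
from dask_kubernetes import KubeCluster

# Hypothetical worker pod template: each pod runs one dask-worker
# command that forks four worker processes (--nprocs 4).
worker_pod = {
    "kind": "Pod",
    "metadata": {"labels": {"app": "dask-worker"}},
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "dask-worker",
                "image": "daskdev/dask:latest",  # placeholder image
                "args": [
                    "dask-worker",
                    "--nprocs", "4",        # four worker processes per pod
                    "--nthreads", "1",
                    "--memory-limit", "4GB",
                    "--death-timeout", "60",
                ],
            }
        ],
    },
}

cluster = KubeCluster.from_dict(worker_pod)

# scale() counts pods, but the scheduler sees nprocs workers per pod,
# so this produces 4 pods and 16 worker processes.
cluster.scale(4)
```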
Top GitHub Comments
@jacobtomlinson I’m not sure #2 needs to be addressed.
The scheduler already understands hosts:
https://github.com/dask/distributed/blob/1be9265ac11876df766bb8bd6d6eb519d04d3bac/distributed/scheduler.py#L6398
and Adaptive supports configuring that parameter
https://github.com/dask/distributed/blob/1be9265ac11876df766bb8bd6d6eb519d04d3bac/distributed/deploy/adaptive.py#L93
I think we would only need to modify dask-kubernetes to configure Adaptive with the proper key?
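Something along these lines is what I have in mind; an untested sketch, assuming cluster is a KubeCluster (as in the pod-template sketch above) and that adapt() forwards its keyword arguments to distributed’s Adaptive (the minimum/maximum bounds are placeholders):

```python
# Untested sketch: group workers by host so adaptive scale-down
# recommendations are made per machine/pod rather than per worker
# process. adapt() forwards keyword arguments to distributed's
# Adaptive, and worker_key is passed through to
# Scheduler.workers_to_close(key=...).
cluster.adapt(
    minimum=1,    # placeholder bounds
    maximum=20,
    worker_key=lambda worker_state: worker_state.host,
)
```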
Just a note that I just started digging around, and I’m not sure this is an issue (I was looking at 2021.07 earlier last week). I believe the recommendations I’m getting back from the scheduler are for whole pods, but I can confirm on this issue later on when I can dig deeper.
I do think there is an issue where dask_kubernetes does not know about pods that are still starting. I had a situation where the scheduler wanted to scale down to 1, and all pods were shut down except for one that was still in the process of starting up. When I confirm that, I will write it up as a separate issue, and possibly close this one.