scheduler.get_comm_cost a significant portion of runtime in merge benchmarks
I’ve been profiling distributed workflows in an effort to understand where there are potential performance improvements to be made (this is ongoing with @gjoseph92 amongst others). I’m particularly interested in scale-out scenarios, where the number of workers is large. Alongside that, I’ve also been looking at cases where the number of workers is quite small but the dataframes have many partitions: this produces many tasks at a scale where debugging/profiling is more manageable.
The benchmark setup I have builds two dataframes and then merges them on a key column with a specified matching fraction. Each worker gets P partitions with N rows per partition. I use 8 workers. I’m using cudf dataframes (so the merge itself is fast, which means that I notice sequential overheads sooner).
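For concreteness, here is a minimal pandas sketch of the benchmark's data shape. This is a hypothetical illustration, not the actual benchmark code: the real runs build P partitions of N rows per worker as cudf dataframes on a dask cluster, and `build_frames`, the column names, and the key-shifting trick are all my own inventions to show the matching-fraction idea.

```python
import numpy as np
import pandas as pd

def build_frames(n_rows, match_fraction, seed=0):
    """Build two frames whose key columns overlap by `match_fraction`.

    Hypothetical sketch of the benchmark's data shape; the real setup
    distributes (dask-)cudf partitions across 8 workers.
    """
    rng = np.random.default_rng(seed)
    left = pd.DataFrame({"key": np.arange(n_rows),
                         "payload": rng.random(n_rows)})
    keys = np.arange(n_rows)
    n_miss = int(n_rows * (1 - match_fraction))
    keys[:n_miss] += n_rows  # shift these out of range so they never match
    right = pd.DataFrame({"key": keys, "other": rng.random(n_rows)})
    return left, right

left, right = build_frames(10_000, match_fraction=0.5)
merged = left.merge(right, on="key")  # inner join on the key column
```

With unique keys on both sides, the merged row count is simply `match_fraction * n_rows`, which makes it easy to sanity-check the join.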
Attached are two speedscope plots (and data) from py-spy-based profiling of the scheduler in a scenario with eight workers, P=100, and N=500,000. In a shuffle, the total number of tasks peaks at about 150,000 per the dashboard. The second profile is very noisy since I’m using https://github.com/benfred/py-spy/pull/497 to avoid filtering out Python builtins (so that we can see in more detail what is happening). Interestingly, at this scale we don’t see much of a pause in GC (but I am happy to try out more scenarios that might be relevant to #4987).
In this scenario, a single merge takes around 90s. If I do the minimal thing of letting Scheduler.get_comm_cost return 0 immediately, this drops to around 50s (using pandas it drops from 170s to around 130s). From the detailed profile, we can see that the majority of this time is spent in set.difference. I’m sure there’s a more reasonable fix that isn’t quite such a large hammer.
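To make the hotspot concrete, here is a rough pure-Python sketch of the shape of the work `get_comm_cost` does per candidate worker, plus the return-0 short-circuit used for the measurement above. This is a hypothetical simplification (the real method operates on the scheduler's TaskState/WorkerState objects, not plain sets and dicts), but it shows why the set.difference cost scales with the number of dependencies and is paid once per candidate worker considered for each task.

```python
def comm_cost(dependencies, worker_has, nbytes, bandwidth=100e6):
    """Estimated seconds to move a task's missing inputs to one worker.

    Hypothetical simplification: diff the task's dependency keys against
    the keys the worker already holds (the set.difference hotspot), then
    sum the bytes that would have to travel over the wire.
    """
    missing = dependencies.difference(worker_has)
    return sum(nbytes[k] for k in missing) / bandwidth

def comm_cost_hammer(dependencies, worker_has, nbytes, bandwidth=100e6):
    # The "large hammer" from the experiment: report zero transfer cost
    # unconditionally. The set.difference disappears, but worker selection
    # then ignores data locality entirely -- fine for measuring overhead,
    # not a real fix.
    return 0
```

With roughly 150,000 tasks in flight and the estimate recomputed per (task, candidate worker) pair, even a cheap set.difference adds up, which is consistent with it dominating the detailed profile.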
merge-scheduler-100-chunks-per-worker-no-filter.json.gz merge-scheduler-100-chunks-per-worker.json.gz
(cc @pentschev, @quasiben, and @rjzamora)
Top GitHub Comments
Yes, it did, I’m about to follow up more coherently to @gjoseph92’s last query with a separate issue.
I will try this out.