Using groupby with custom index
Hello,
I have 6-hourly data (ERA Interim) for around 10 years, and I want to calculate the annual 6-hourly climatology, i.e., 366 * 4 values, with each value corresponding to one 6-hourly interval of the year. I am chunking the data along longitude. I'm using xarray 0.9.1 with Python 3.6 (Anaconda).
For a daily climatology on this data, I do the usual:
mean = data.groupby('time.dayofyear').mean(dim='time').compute()
For the 6-hourly version, I am trying the following:
test = (data['time.hour']/24 + data['time.dayofyear'])
test.name = 'dayHourly'
new_test = data.groupby(test).mean(dim='time').compute()
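For reference, the same key can also be written as an integer label (a minimal sketch; the name sixhourly is made up here, and it assumes the time steps fall exactly on hours 0, 6, 12 and 18):

# One integer label per 6-hourly slot of the year: dayofyear runs 1..366 and
# hour // 6 runs 0..3, giving up to 366 * 4 = 1464 distinct groups.
sixhourly = data['time.dayofyear'] * 4 + data['time.hour'] // 6
sixhourly.name = 'sixhourly'
new_test = data.groupby(sixhourly).mean(dim='time').compute()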
The daily climatology takes around 15 minutes on my data, whereas the 6-hourly version ran for almost 30 minutes, at which point I gave up and killed the process.
Is there an obvious reason why the daily version is so much faster? data in both cases is the same 6-hourly dataset. And is there an alternative way of expressing this computation that would make it faster?
TIA, Joy

We currently do all the groupby handling ourselves in xarray, which means that when you group over smaller units the dask graph gets bigger and each task gets smaller. Given that each chunk in the grouped data holds only about 250,000 elements, it's not surprising that things get slower: that's near the point where Python overhead starts to become significant.
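A rough way to see this effect (a sketch, reusing data and test from the question; the graph inspection needs a reasonably recent xarray/dask):

import numpy as np

# The day-of-year key has ~366 groups; the 6-hourly key has ~1464, so xarray
# emits roughly four times as many (and correspondingly smaller) tasks per chunk.
print(len(np.unique(data['time.dayofyear'].values)))   # ~366
print(len(np.unique(test.values)))                     # ~1464

daily_mean = data.groupby('time.dayofyear').mean(dim='time')   # still lazy
sixhourly_mean = data.groupby(test).mean(dim='time')           # still lazy
print(len(daily_mean.__dask_graph__()), len(sixhourly_mean.__dask_graph__()))  # total task counts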
It would be useful to benchmark graph creation and execution separately (especially using dask-distributed’s profiling tools) to understand where the slow-down is.
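A minimal sketch of that separation (wall-clock only; the distributed profiler gives a much finer breakdown), again assuming data and test from the question:

import time

t0 = time.perf_counter()
lazy = data.groupby(test).mean(dim='time')   # builds the lazy result, i.e. the dask graph
t1 = time.perf_counter()
result = lazy.compute()                      # executes the graph
t2 = time.perf_counter()
print(f'graph construction: {t1 - t0:.1f} s, execution: {t2 - t1:.1f} s')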
One thing that might help quite a bit in cases like this, where the individual groups are small, is to rewrite xarray's groupby to do some of the groupby operations inside dask rather than in a loop outside of dask. That would allow executing tasks on bigger chunks of arrays at once, which could significantly reduce scheduler overhead.
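A rough illustration of that idea (not xarray's actual implementation): do the whole per-group reduction with a single NumPy call inside each chunk, so the number of tasks scales with the number of chunks rather than the number of groups. The sketch assumes data is a float-valued, dask-backed DataArray with dimensions (time, lat, lon), chunked only along longitude, and reuses test from the question.

import numpy as np
import pandas as pd
import dask.array as da

def grouped_sum(block, codes, n_groups):
    # One task per chunk: every time step is accumulated into its group's slot.
    out = np.zeros((n_groups,) + block.shape[1:], dtype=block.dtype)
    np.add.at(out, codes, block)
    return out

values = data.data                          # the underlying (time, lat, lon) dask array
codes, groups = pd.factorize(test.values)   # integer group code per time step (order of first appearance)
n_groups = len(groups)

sums = da.map_blocks(
    grouped_sum, values, codes=codes, n_groups=n_groups,
    chunks=((n_groups,),) + values.chunks[1:], dtype=values.dtype)
counts = np.bincount(codes, minlength=n_groups)
climatology = sums / counts[:, None, None]  # grouped mean, shape (n_groups, lat, lon)

Wrapping the result back into a labelled DataArray (and handling missing values) is left out; the point is just that each longitude chunk contributes a single task regardless of how many groups there are.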
Slightly OT observation: performance issues are increasingly being raised here (see also #1301). Wouldn't it be great if we had a shared space somewhere in the cloud to host these big-ish datasets and run performance benchmarks in a controlled environment?