Parallel all-reduce communication and backprop
Thank you for open-sourcing such a great repo for the community! Your work is really helping our team with training large pretrained models 😃
In our experiments, we find that when training a not-that-large model (e.g. 2.7B) with data parallelism, the scaling efficiency across multiple nodes is not good enough (under 70% for 2 nodes in our case). One reason is that currently the backward computation (the “BackwardPass” instruction) and the communication (introduced in the “ReduceGrads” instruction) are executed sequentially. If we instead start the all-reduce communication right after each gradient is computed, we could overlap the backward computation with ReduceGrads, reducing the negative effect of cross-node communication.
We could use the backward hook mechanism in PyTorch for this optimization. Here is an example in the source code of PyTorch.
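As a rough illustration of the idea, a per-parameter backward hook can launch the gradient reduction as soon as that parameter's grad is ready, instead of waiting for the whole backward pass to finish. The sketch below is not the actual gpt-neox/DeepSpeed code: it uses a thread-pool stand-in for `torch.distributed.all_reduce(..., async_op=True)` (since no process group is set up here), but the overlap structure is the same.

```python
import torch
from concurrent.futures import ThreadPoolExecutor

# Stand-in for an async all-reduce: in real code this would be
# dist.all_reduce(grad, async_op=True), which launches an NCCL kernel
# and returns a work handle to wait on later.
pool = ThreadPoolExecutor(max_workers=2)

def fake_async_allreduce(grad):
    # With a real process group this would sum/average grads across ranks;
    # here it is an identity placeholder.
    return grad

model = torch.nn.Linear(4, 2)
handles = []

for p in model.parameters():
    # The hook fires as soon as *this* parameter's grad is computed during
    # backward, so the "communication" overlaps with the rest of backprop.
    p.register_hook(lambda g: handles.append(pool.submit(fake_async_allreduce, g)))

loss = model(torch.randn(3, 4)).sum()
loss.backward()

# After backward finishes, wait on all outstanding reductions before step().
reduced = [h.result() for h in handles]
```

This is essentially what PyTorch's `DistributedDataParallel` does internally (with gradient bucketing on top), so the mechanism itself is well proven.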
This optimization may only work for pure data parallelism, as the communication pattern is quite different under model parallelism or pipeline parallelism.
We’d love to help if you are interested in applying such an optimization to your project (gpt-neox or DeeperSpeed)~ Thank you again for your great contribution to the community!
P.S. We found some behavior that differs from the comment here: https://github.com/EleutherAI/gpt-neox/blob/f6c611f3211521fa7b145950ea100f44a2d0ead6/megatron/neox_arguments/arguments.py#L755-L758
- In our experiment, the `PipelineModule` wrapper is used when `pipe_parallel_size` is set to 1, and the `to_sequential()` version is used only when `pipe_parallel_size` is set to 0;
- `PipelineModule` is observably faster than the `to_sequential()` version.

I wonder if these are expected behaviors? Thank you.
- Created 2 years ago
- Comments: 6 (4 by maintainers)
Top GitHub Comments
Hey @zhuzilin , really interesting!
Firstly, with regard to the speed difference between pp=0 and pp=1, we also found a similar thing; see https://github.com/EleutherAI/gpt-neox/pull/269. Although maybe the speed difference isn’t quite as stark as what you found. I’m not sure of the source of the difference.
With regard to the optimization, I see no reason this couldn’t also work with MP and PP, and we’d be very interested in getting something like this implemented. I suspect it might not be so straightforward with DeepSpeed, though! Fundamentally, you’re doing the same communication op with MP / PP; just the group you’re reducing within is smaller. So I think this should definitely be possible, but I’m not yet certain how this optimization would interact with:
- DeepSpeed. All training currently relies on the DeepSpeed engine, and they “handle” DP optimization for you. We would have to figure out how to fully handle this ourselves, or implement the optimization into DeepSpeed. (We’re trying to remove our dependency on DeepSpeed and move to OSLO, but this will likely take a while.)
- ZeRO 1/2. This also ties in with the above, since these optimizers are implemented in DeepSpeed. Making this optimization compatible with the ZeRO 1/2 optimizers would likely require some more work.
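To make the “same op, smaller group” point concrete: under model parallelism, each gradient's all-reduce would run inside its data-parallel group rather than across the whole world. A hypothetical helper for enumerating those groups (the rank layout is an assumption, following the common convention that adjacent ranks form one model-parallel group):

```python
def data_parallel_groups(world_size, mp_size):
    """Return the lists of ranks that hold replicas of the same model shard,
    i.e. the groups the per-gradient all-reduce would run within.

    Assumes adjacent ranks [0..mp_size-1], [mp_size..2*mp_size-1], ...
    each form one model-parallel group."""
    assert world_size % mp_size == 0
    # Rank r holds shard r % mp_size; replicas of shard i live at
    # ranks i, i + mp_size, i + 2*mp_size, ...
    return [list(range(i, world_size, mp_size)) for i in range(mp_size)]

# e.g. 8 GPUs with model-parallel size 2 -> two data-parallel groups of 4;
# each list would be passed to torch.distributed.new_group(ranks), and the
# backward hooks would all-reduce within that group instead of the world.
groups = data_parallel_groups(8, 2)
# groups == [[0, 2, 4, 6], [1, 3, 5, 7]]
```

The hook-based overlap itself is unchanged; only the process group handed to the all-reduce shrinks.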