Support for DeepSpeed stage-3
🚀 Feature Request
The documentation here states that stage-3 is not yet supported.
https://docs.mosaicml.com/en/v0.10.0/notes/distributed_training.html#deepspeed
I tried passing this config to the trainer and it seems to work:
deepspeed_config = {"zero_optimization": {"stage": 3, "stage3_gather_16bit_weights_on_model_save": True}}
What exactly is missing from the trainer's stage-3 support? Is checkpoint saving not set up to gather a full state_dict from the sharded stage-3 parameters?
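For reference, a minimal sketch of passing that config, assuming Composer's Trainer accepts it via its deepspeed_config argument; the tiny ComposerModel and random data below are placeholders, and actually running it needs a GPU machine with deepspeed installed:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

from composer import Trainer
from composer.models import ComposerModel


class TinyClassifier(ComposerModel):
    """Placeholder model; stands in for the real network."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(32, 10)

    def forward(self, batch):
        inputs, _ = batch
        return self.net(inputs)

    def loss(self, outputs, batch, *args, **kwargs):
        _, targets = batch
        return F.cross_entropy(outputs, targets)


# Random data purely to keep the sketch self-contained.
train_dataloader = DataLoader(
    TensorDataset(torch.randn(128, 32), torch.randint(0, 10, (128,))),
    batch_size=16,
)

deepspeed_config = {
    "zero_optimization": {
        "stage": 3,
        # Gather the full 16-bit weights on rank 0 at save time so the
        # checkpoint is a single state_dict instead of per-rank shards.
        "stage3_gather_16bit_weights_on_model_save": True,
    }
}

trainer = Trainer(
    model=TinyClassifier(),
    train_dataloader=train_dataloader,
    max_duration="1ep",
    deepspeed_config=deepspeed_config,
)
trainer.fit()
```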
Motivation
Stage-3 would add DeepSpeed parameter sharding, enabling much larger models to be trained with Composer.
@karan6181 I wasn't able to get BLOOM working properly with FSDP. I believe BLOOM was trained with Megatron-LM plus DeepSpeed and hasn't been tested with FSDP, whereas it works out of the box with DeepSpeed stage-2 in Composer. So for my current research I'd rather stick with DeepSpeed than spend time getting BLOOM to work with FSDP!
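For context, a rough sketch of the stage-2 setup described above, assuming composer.models.HuggingFaceModel; the small bigscience/bloom-560m checkpoint and synthetic token batches are stand-ins for the real model and data:

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

from composer import Trainer
from composer.models import HuggingFaceModel

# Small BLOOM checkpoint as a stand-in for the full model.
name = "bigscience/bloom-560m"
hf_model = AutoModelForCausalLM.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)
model = HuggingFaceModel(hf_model, tokenizer=tokenizer)

# Synthetic token batches purely to keep the sketch self-contained;
# the default collate stacks the per-sample dicts into batched tensors.
samples = [
    {"input_ids": ids, "labels": ids}
    for ids in torch.randint(0, hf_model.config.vocab_size, (8, 64))
]
train_dataloader = DataLoader(samples, batch_size=2)

# ZeRO stage-2 shards optimizer state and gradients but keeps a full copy
# of the parameters on every rank, so checkpointing works as usual.
deepspeed_config = {"zero_optimization": {"stage": 2}}

trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    max_duration="10ba",
    deepspeed_config=deepspeed_config,
)
trainer.fit()
```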
@ananyahjha93 Note that we just updated Composer to support the most recent DeepSpeed release, in case this is still an issue.