
Support for DeepSpeed stage-3


🚀 Feature Request

The documentation here states that stage-3 is not yet supported.

https://docs.mosaicml.com/en/v0.10.0/notes/distributed_training.html#deepspeed

I tried passing this config to the trainer and it seems to work:

deepspeed_config = {"zero_optimization": {"stage": 3, "stage3_gather_16bit_weights_on_model_save": True}}
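
For reference, here is a minimal, self-contained sketch of how I am wiring this config into the Trainer (the tiny model and random data are just stand-ins for my real setup, and DeepSpeed still needs to be launched on GPUs via the composer launcher):

import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from composer import Trainer
from composer.models import ComposerModel

# ZeRO stage-3 config: shard parameters across ranks, and gather the sharded
# fp16 weights back into a full state_dict whenever the model is saved.
deepspeed_config = {
    "zero_optimization": {
        "stage": 3,
        "stage3_gather_16bit_weights_on_model_save": True,
    }
}

class TinyModel(ComposerModel):
    """Stand-in ComposerModel: a linear classifier over random features."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(16, 4)

    def forward(self, batch):
        inputs, _ = batch
        return self.net(inputs)

    def loss(self, outputs, batch):
        _, targets = batch
        return F.cross_entropy(outputs, targets)

# Random toy data so the sketch runs end to end.
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 4, (64,)))

trainer = Trainer(
    model=TinyModel(),
    train_dataloader=DataLoader(dataset, batch_size=8),
    max_duration="1ep",
    deepspeed_config=deepspeed_config,  # Composer enables DeepSpeed when this is passed
)
trainer.fit()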

What exactly is missing from the trainer's stage-3 support? Is checkpoint saving not set up to gather the full state_dict correctly under stage-3?
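
For context on the gathering part: DeepSpeed itself ships an offline utility that consolidates a sharded ZeRO-3 checkpoint back into a single state_dict; whether Composer's checkpointing does something equivalent is exactly what I'm unsure about. A rough sketch (the checkpoint path is a placeholder):

import torch
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

# Consolidate the per-rank ZeRO-3 shards written by DeepSpeed into a single
# fp32 state_dict on CPU. This runs offline, outside any distributed job.
checkpoint_dir = "path/to/deepspeed/checkpoint"  # placeholder
state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir)

# The consolidated weights can then be saved or loaded into a plain nn.Module.
torch.save(state_dict, "consolidated_fp32.pt")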

Motivation

Stage-3 would enable model parameter sharding via DeepSpeed, making it possible to train much larger models with Composer.

Issue Analytics

  • State: open
  • Created: a year ago
  • Comments: 6 (6 by maintainers)

Top GitHub Comments

1 reaction
ananyahjha93 commented, Oct 4, 2022

@karan6181 I wasn't able to get BLOOM working properly with FSDP. I think BLOOM was trained using Megatron-LM with DeepSpeed and hasn't been tested with FSDP, whereas it works out of the box with DeepSpeed stage-2 in Composer. So for my current research I'd prefer to stick with DeepSpeed rather than dig into making BLOOM work with FSDP!

0 reactions
mvpatel2000 commented, Dec 6, 2022

@ananyahjha93 note that we just updated Composer to support the most recent DeepSpeed release, if this is still an issue.


