Support ZeRO-Infinity
Is your feature request related to a problem? Please describe.
I'm frustrated because I can't use my GeForce MX250 to train a 13B GPT-NeoX model.
Describe the solution you’d like
DeepSpeed v0.3.15 introduced ZeRO-Infinity, an extension of ZeRO-3 that can also offload parameters and optimizer states to NVMe disk.
If I understand correctly, after https://github.com/EleutherAI/gpt-neox/pull/199 is approved and https://github.com/EleutherAI/DeeperSpeed is merged with upstream DeepSpeed, exposing the following parameters would be enough:

    "offload_optimizer": {
        "device": "cpu/gpu/nvme"
    },
    "offload_param": {
        "device": "cpu/gpu/nvme"
    }
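For context, a minimal sketch of how these stanzas sit inside a full DeepSpeed config under ZeRO stage 3 (the `/local_nvme` path is a hypothetical placeholder; exact keys should be checked against the DeepSpeed configuration docs):

    {
        "zero_optimization": {
            "stage": 3,
            "offload_optimizer": {
                "device": "nvme",
                "nvme_path": "/local_nvme"
            },
            "offload_param": {
                "device": "nvme",
                "nvme_path": "/local_nvme"
            }
        }
    }

With `"device": "cpu"` the same stanzas offload to host memory instead, and `nvme_path` is only needed for NVMe offload.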
Issue Analytics
- State:
- Created 2 years ago
- Comments: 5 (3 by maintainers)
Top Results From Across the Web
[2104.07857] ZeRO-Infinity: Breaking the GPU Memory Wall ...
Therefore, the growth in model scale has been supported primarily through ... In this paper we present ZeRO-Infinity, a novel heterogeneous ...
ZeRO-Infinity and DeepSpeed: Unlocking unprecedented ...
ZeRO-Infinity offers a leap of orders of magnitude in DL training system technology, opening a path to supporting the next 1,000x increase ...
Top GitHub Comments
@bratao Basically that the marginal improvement for Z-3 and Z-infinity over Z-1 + pipeline parallelism was minimal for training at the 100B scale.
Sounds fair @StellaAthena, hopefully I can work on it during my upcoming vacation. But I'm curious, can you share the Nvidia feedback?