Multi-GPU Support
Hello,
Have you tried training on a multi-GPU setup? I tried running your fine-tuning example like so:
export CUDA_VISIBLE_DEVICES=0,1
python -m src.pl_train -c t03b.json+ia3.json+rte.json -k load_weight="pretrained_checkpoints/t03b_ia3_finish.pt" exp_name=t03b_rte_seed42_ia3_pretrained100k few_shot_random_seed=42 seed=42
But I get errors in the Lightning data loaders.
Any ideas? Thank you!
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Hi, sorry for getting back late. Add allow_skip_exp=false to the command, similar to https://github.com/r-three/t-few/blob/master/configs/t011b.json, in order to run multi-GPU training.
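For reference, this is roughly what the command from the question would look like with that flag appended (a sketch only; the config names and checkpoint path are copied from the report above, and appending the override to the -k list is assumed to follow the repository's key=value convention):
export CUDA_VISIBLE_DEVICES=0,1
python -m src.pl_train -c t03b.json+ia3.json+rte.json -k load_weight="pretrained_checkpoints/t03b_ia3_finish.pt" exp_name=t03b_rte_seed42_ia3_pretrained100k few_shot_random_seed=42 seed=42 allow_skip_exp=false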
I am sorry for getting back late. I don't think I can fully resolve the problem. One thing I notice: even though you have 4 GPUs, I think only 0 and 1 are being used. Maybe try export CUDA_VISIBLE_DEVICES=0,1,2,3.
The deepspeed, torch, and CUDA versions in the requirements worked for us on A100, A5000, and A6000 GPUs. I am not sure about other GPUs. Maybe @HaokunLiu can help?
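If it helps to confirm that your environment matches the pinned requirements, a quick check is the following (a minimal sketch; it only assumes the torch and deepspeed packages from the requirements file are importable and that nvidia-smi is on the PATH):
python -c "import torch; print('torch', torch.__version__, 'cuda', torch.version.cuda)"
python -c "import deepspeed; print('deepspeed', deepspeed.__version__)"
nvidia-smi --query-gpu=name --format=csv,noheader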