Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might look while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

[Q] Sweep with each sweep run on multi-gpu DDP

See original GitHub issue

Hi, I am using sweeps, but each sweep training run is on 4 GPUs in DDP training mode.

I need access to wandb.config in all 4 processes, so that the hyperparameters given by the sweep server can be applied in all 4 processes of each training run.

I understand that your documentation on DDP recommends two ways of using wandb with DDP. I am currently using the first method, which logs only from the first process. But this does not work with sweeps, because the other 3 processes need to access the wandb config (as explained above). How would the second method work with a sweep, then? If I call wandb.init in the other 3 processes, will they still get the same set of hyperparameters as the main process as part of the sweep?

Thanks!

I don’t think it is important, but in case it helps: I am using PyTorch with the mmdetection framework.
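One way to make the first method (logging only from rank 0) work with a sweep is to let rank 0 fetch the hyperparameters from the sweep server and broadcast them to the other ranks. A minimal sketch, assuming torch.distributed is already initialized (e.g. via torchrun) and only rank 0 talks to wandb; the helper name get_sweep_config is hypothetical, not from the original thread:

```python
import torch.distributed as dist
import wandb

def get_sweep_config():
    """Fetch the sweep's hyperparameters on rank 0 and share them with all ranks."""
    if dist.get_rank() == 0:
        wandb.init()                      # the sweep agent injects this run's hyperparameters
        payload = [dict(wandb.config)]    # convert to a plain dict so it can be pickled
    else:
        payload = [None]
    dist.broadcast_object_list(payload, src=0)  # every rank receives rank 0's dict
    return payload[0]

config = get_sweep_config()  # same hyperparameters in all 4 processes
```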

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
ramit-wandb commented, Jul 19, 2022

Yup, they should work. You want to make sure you broadcast the config and communicate the logs to rank 0.
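As a rough illustration of “communicate the logs to rank 0”: reduce the per-rank metrics across the process group and log only from the rank that owns the wandb run. A minimal sketch, assuming a DDP process group is initialized; log_mean_loss is a hypothetical helper:

```python
import torch
import torch.distributed as dist
import wandb

def log_mean_loss(loss: torch.Tensor, step: int):
    loss = loss.detach().clone()                 # avoid mutating the training tensor
    dist.all_reduce(loss, op=dist.ReduceOp.SUM)  # sum the losses from all ranks
    loss /= dist.get_world_size()                # average across the 4 GPUs
    if dist.get_rank() == 0:                     # only rank 0 holds the wandb run
        wandb.log({"loss": loss.item()}, step=step)
```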

0 reactions
ramit-wandb commented, Jul 25, 2022

Hi @levan92,

I’m closing this issue out. In case you need further assistance here, feel free to reply and we can continue the conversation.

Read more comments on GitHub >

Top Results From Across the Web

[Q] Sweep with each sweep run on multi-gpu DDP · Issue #3883
Hi, I am using sweep, but each sweep training run is on 4 GPUs in a DDP training mode. I need to have...
Sweep in DDP mode - W&B Help - WandB community
Hey there, to use sweeps in a multi-GPU setup you need to do the following: specify the hyperparameters you're sweeping over in a YAML... (a sketch of such a configuration follows these results)
GPU training (Intermediate) - PyTorch Lightning - Read the Docs
Lightning supports multiple ways of doing distributed training. If you request multiple GPUs or nodes without setting a mode, DDP Spawn will be...
A Performance Analysis of Parallel Differential Dynamic ...
parallel forward simulation on each of the Mf blocks. Parallel DDP (Algorithm 1) combines the instruction-level parallelizations, forward sweep, Mf multiple ...
Multiple GPU Support — NVIDIA DALI 1.20.0 documentation
Production grade solutions now use multiple machines with multiple GPUs to run the training of neural networks in reasonable time.
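The W&B Help snippet above refers to defining the swept hyperparameters in a sweep configuration. A minimal sketch of an equivalent configuration expressed as a Python dict (the parameter names, project name, and train_fn are illustrative, not from the original thread):

```python
import wandb

sweep_config = {
    "method": "bayes",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-5, "max": 1e-2},
        "batch_size": {"values": [16, 32, 64]},
    },
}

def train_fn():
    # Placeholder: in practice this would launch one 4-GPU DDP training run
    # and broadcast wandb.config to the other ranks as sketched earlier.
    wandb.init()

sweep_id = wandb.sweep(sweep_config, project="my-project")  # project name is a placeholder
wandb.agent(sweep_id, function=train_fn, count=10)          # run 10 trials
```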
