[Q] Sweep with each sweep run on multi-gpu DDP
Hi, I am using a sweep, but each sweep run trains on 4 GPUs in DDP mode.
I need access to wandb.config in all 4 processes so that the hyperparameters chosen by the sweep server can be applied in every process of each training run.
I understand that your documentation on DDP recommends two ways to use wandb with DDP. I am currently using the first method, logging only from the first process, but this does not work with sweeps because the other 3 processes also need access to the wandb config (as explained above). How would the second method work with a sweep? If I call wandb.init in the other 3 processes, will they receive the same set of hyperparameters as the main process, as part of the sweep?
Thanks!
I don’t think it is important, but in case it helps: I am using PyTorch with the mmdetection framework.
Yes, that should work. You want to make sure you broadcast the config from rank 0 to the other processes and communicate the logs back to rank 0.
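For what it's worth, here is a minimal sketch of that pattern. It assumes the processes are launched with torchrun (so `dist.init_process_group` can read the environment), uses `torch.distributed.broadcast_object_list` to share the config, and calls `wandb.config.as_dict()` on rank 0; the metric name and training code are placeholders, not your actual setup:

```python
import torch.distributed as dist
import wandb


def get_sweep_config():
    """Init wandb on rank 0 only and broadcast the sweep-selected
    hyperparameters to every DDP process."""
    rank = dist.get_rank()

    if rank == 0:
        # Only rank 0 talks to the sweep server; after init,
        # wandb.config holds this run's hyperparameters.
        wandb.init()
        config = wandb.config.as_dict()
    else:
        config = None

    # Broadcast the plain dict from rank 0 to all other ranks.
    payload = [config]
    dist.broadcast_object_list(payload, src=0)
    return payload[0]


if __name__ == "__main__":
    # Assumes launch via e.g.: torchrun --nproc_per_node=4 train.py
    dist.init_process_group(backend="nccl")
    config = get_sweep_config()

    # Every rank now sees the same hyperparameters, e.g. config["lr"].
    # ... build model/optimizer from config and run DDP training ...

    # Reduce/collect metrics onto rank 0 and log only there.
    if dist.get_rank() == 0:
        wandb.log({"placeholder_metric": 0.0})
        wandb.finish()
```

The alternative is to call wandb.init in every process (grouped runs); with a sweep, though, only the process started by the agent is guaranteed to receive the sweep's parameters, which is why broadcasting from rank 0 is the safer route.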
Hi @levan92,
I’m closing this issue out. In case you need further assistance here, feel free to reply and we can continue the conversation.