Multi-GPU VarNet training and batch size

See original GitHub issue

My understanding is that the current VarNet training code uses a batch size of 1 per GPU. Therefore, in the multi-GPU training scenario, the effective batch size would be num_gpus * batch_size = num_gpus, since the gradients are averaged across GPUs after the backward pass.
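For reference, a minimal sketch of that arithmetic (the world size of 8 is a made-up example, not a number from the paper or repo): under data-parallel training each GPU runs its own forward/backward on its per-GPU batch, and averaging the gradients in the all-reduce makes the update behave like one step on the combined batch.

```python
# Hedged sketch, not fastMRI code: effective batch size under data-parallel
# training when gradients are averaged across ranks after backward().
per_gpu_batch_size = 1                 # VarNet trains with 1 slice per GPU
num_gpus = 8                           # hypothetical world size
effective_batch_size = per_gpu_batch_size * num_gpus
print(effective_batch_size)            # -> 8 examples averaged per update
```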

According to the paper (and what I can also see in the code) the learning rate is set to 0.0003, but there is no mention of the (effective) batch size used in the experiments. Since the learning rate typically has to be adjusted to the batch size, it would be good to know what batch size was used (that is, how many GPUs were used). I would expect the final SSIM on the validation/test set to change with the number of GPUs if the learning rate is kept at 0.0003.
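One common heuristic for that adjustment is the linear scaling rule: scale the learning rate proportionally to the effective batch size. The sketch below only illustrates the heuristic, not anything stated by the fastMRI authors, and the reference GPU count is exactly the unknown being asked about, so the value here is hypothetical.

```python
# Linear scaling rule, a hedged heuristic rather than the paper's recipe:
# lr_new = lr_ref * (effective_batch_new / effective_batch_ref).
base_lr = 0.0003      # learning rate reported in the paper/code
ref_gpus = 8          # hypothetical GPU count of the reference run (unknown)
my_gpus = 4           # GPUs available in your own run
scaled_lr = base_lr * my_gpus / ref_gpus
print(scaled_lr)      # -> 0.00015 with these made-up numbers
```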

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 7 (6 by maintainers)

Top GitHub Comments

1 reaction
z-fabian commented, Aug 15, 2020

VarNet only works with a batch size of 1 per GPU. This is because the slices have varying sizes. If you want to increase the mini-batch size, that is, the number of training examples averaged per gradient update, just increase the number of GPUs used for training.
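To illustrate why varying slice sizes force a per-GPU batch size of 1 (the shapes below are invented, not taken from the dataset): tensors of different widths cannot be stacked into one batch tensor by the default collate.

```python
import torch

# Hedged illustration: two k-space slices with different widths (made-up
# shapes) cannot be stacked into a single batch tensor.
slice_a = torch.randn(15, 640, 368, 2)   # hypothetical (coils, H, W, complex)
slice_b = torch.randn(15, 640, 322, 2)   # a slice from a different volume
try:
    batch = torch.stack([slice_a, slice_b])  # what batch_size=2 would require
except RuntimeError as err:
    print(err)  # "stack expects each tensor to be equal size, but got ..."
```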

0 reactions
zhan4817 commented, Aug 15, 2020

However, when I tried to train VarNet with batch_size = 2, it showed the following error during the validation sanity check:

… 237, in get_low_frequency_lines
    while mask[…, r, :]:
RuntimeError: bool value of Tensor with more than one value is ambiguous

Do you have any idea? Thanks!
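For context, a minimal sketch of how that error can arise (the mask shape is hypothetical and this is not the fastMRI code itself): with batch_size > 1 the sampling mask keeps a batch dimension, so the slice taken inside the while condition contains more than one value and cannot be converted to a single Python bool.

```python
import torch

# Hedged reproduction of the error above (shapes are made up).
mask = torch.zeros(2, 1, 368, 1)   # hypothetical mask with a batch dim of 2
r = 0
try:
    while mask[..., r, :]:         # bool() of a multi-element tensor
        r += 1
except RuntimeError as err:
    print(err)  # the "more than one value is ambiguous" RuntimeError
```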


Top Results From Across the Web

  • Crossbow: Scaling Deep Learning with Batch Sizes on Multi ...
    They process a batch of training data at a time, partitioned across GPUs, and average the resulting gradients to obtain an updated global...
  • Effect of batch size and number of GPUs on model accuracy
    In the case of multiple GPUs, the rule of thumb will be using at least 16 (or so) batch size per GPU, given...
  • Efficient Training on Multiple GPUs - Hugging Face
    DP splits the global data batch size into mini-batches, so if you have a DP degree of 4, a global batch size of...
  • Training with multiple GPUs - NVIDIA Documentation Center
    Batch Size - The value of batch size is constrained by the GPU memory. You have to choose a batch size that is...
  • 13.5. Training on Multiple GPUs - Dive into Deep Learning
    We first use a batch size of 256 and a learning rate of 0.2 ...
