Performance of batch_size > 1
See original GitHub issue.

Theoretically, batch_size > 1 should work; in practice, however, performance appears to degrade. I've looked at the data generator and the loss functions, but everything appears to be fine. I'm not sure where the performance degradation comes from; perhaps a fresh set of eyes can help uncover the issue? My intuition is that the problem lies in the loss function, or perhaps in a deeper issue in Keras / TensorFlow.

By extension, this also breaks multi-GPU support, since that requires batch_size > 1.
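One common way a loss function can degrade training only when batch_size > 1 is a sum reduction over the batch instead of a mean, which silently scales the gradient with the batch size. The sketch below is hypothetical (it is not this repository's actual loss, and `loss_sum` / `loss_mean` are illustrative names), using plain NumPy to show the effect:

```python
import numpy as np

def loss_sum(y_true, y_pred):
    # Sum reduction: the loss value grows with batch size, so the
    # effective gradient step is batch_size times larger.
    return np.sum((y_true - y_pred) ** 2)

def loss_mean(y_true, y_pred):
    # Mean reduction: the loss value is invariant to batch size.
    return np.mean((y_true - y_pred) ** 2)

# A batch of 8 samples, each with per-sample squared error 1.0.
y_true = np.ones((8, 1))
y_pred = np.zeros((8, 1))

print(loss_sum(y_true, y_pred))   # 8.0 -- 8x the per-sample loss
print(loss_mean(y_true, y_pred))  # 1.0 -- same as batch_size == 1
```

If the repository's loss behaves like `loss_sum`, moving from batch_size 1 to 16 would be equivalent to multiplying the learning rate by 16, which could explain the observed degradation.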
@awilliamson I think you ran some tests on this, right? Do you still have them stored somewhere? Can you share them?
Issue Analytics
- State:
- Created: 6 years ago
- Comments: 25 (17 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Hi! I can fire up training with batch size 16 today. I'll post here if I see anything odd. Thanks!
Indeed I did. I ensured that the same number of images were trained on, to have a fair comparison.
Yeah, I believe so, and I suggest we close this. If someone has issues with batch_size > 1, they can reopen this issue.