LRFinder w/ Gradient Accumulation
See original GitHub issue

Great package! Thank you for sharing 😃
- I was wondering if you plan on adding gradient accumulation support for using `LRFinder` with a larger batch size (see the plain-PyTorch sketch right after this list for the pattern in question).
- Will you be adding mixed precision support?
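For context, the first point boils down to the standard gradient accumulation pattern. Below is a minimal, self-contained plain-PyTorch sketch of that pattern (it is not the LRFinder API); the toy model, data, and `accumulation_steps` value are illustrative placeholders.

```python
# Minimal sketch of gradient accumulation in plain PyTorch (not LRFinder-specific).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(10, 2)                      # toy model for illustration
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Small real batches; accumulation simulates a 4x larger effective batch size.
dataset = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
loader = DataLoader(dataset, batch_size=4)
accumulation_steps = 4

optimizer.zero_grad()
for step, (inputs, targets) in enumerate(loader):
    # Scale the loss so the accumulated gradient averages over the effective batch.
    loss = criterion(model(inputs), targets) / accumulation_steps
    loss.backward()                           # gradients accumulate in .grad
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                      # one update per effective batch
        optimizer.zero_grad()
```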
Issue Analytics
- Created 4 years ago
- Comments: 11 (9 by maintainers)
Top Results From Across the Web
- Effective Training Techniques - PyTorch Lightning: Accumulated gradients run K small batches of size N before doing a backward pass. The effect is a large effective batch size of...
- LR finder and small batch sizes - fast.ai Course Forums: One possible solution might be to utilize gradient accumulation to simulate higher batch sizes just for calculating the learning rate. However, ...
- Source code for monai.optimizers.lr_finder: ... gradients are not accumulated. non_blocking_transfer: when `True`, moves data to device asynchronously if possible, e.g., moving CPU Tensors with pinned ...
- Performing gradient accumulation with Accelerate: This is done by accumulating gradients over several batches, and only stepping the optimizer after a certain number of batches have been performed...
- LR Finder Using PyTorch - Kaggle: This Python 3 environment comes with many helpful analytics libraries ... also: # https://nvidia.github.io/apex/advanced.html#gradient-accumulation-across- ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
The PR from @NaleRaphael is merged. Thanks @rsomani95 for raising the issue.
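With the PR merged, the range test can be run with an effective batch size larger than the data loader's batch size. The sketch below assumes the `accumulation_steps` argument on `range_test()` that the gradient accumulation feature added; the toy model and data are placeholders, and the argument name should be checked against the installed `torch_lr_finder` version.

```python
# Sketch of an LR range test with gradient accumulation after the merged PR.
# Assumes range_test() accepts `accumulation_steps`; verify against your version.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torch_lr_finder import LRFinder

model = nn.Linear(10, 2)                       # toy model for illustration
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)

dataset = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
trainloader = DataLoader(dataset, batch_size=4)

desired_batch_size, real_batch_size = 32, 4
accumulation_steps = desired_batch_size // real_batch_size  # simulate batch size 32

lr_finder = LRFinder(model, optimizer, criterion, device="cpu")
lr_finder.range_test(
    trainloader,
    end_lr=10,
    num_iter=100,
    step_mode="exp",
    accumulation_steps=accumulation_steps,  # assumed argument from the merged feature
)
lr_finder.plot()    # inspect the loss-vs-lr curve
lr_finder.reset()   # restore the model and optimizer to their initial state
```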
Hi @rsomani95. Many thanks for your help and feedback, and I'm glad that these implementations helped!

It's quite strange that it takes longer to run when `torch.backends.cudnn.benchmark = True`; as far as I know, that flag should speed up training when the input size is fixed in each iteration. However, it seems harmless to put the `torch.backends.cudnn.benchmark` question on hold for now, because it isn't directly related to LRFinder and whether to use it is up to the user. Though, I'll keep it in mind!

Besides, it seems that apex is going to be integrated as a builtin component of PyTorch in the future (nvidia/apex#659). I will keep tracking this, too.

@davidtvs Before merging this PR, I would like to add some code so that users can install `apex` optionally. I'll leave a comment here when it's done. Thank you, guys!
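On the mixed precision point: apex's AMP functionality was later upstreamed into PyTorch as `torch.cuda.amp`. The following is a generic mixed precision training sketch using that native API, independent of LRFinder; the toy model and data are placeholders.

```python
# Generic mixed precision training sketch using PyTorch-native AMP
# (the functionality apex's amp was eventually upstreamed into). Not LRFinder API.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(10, 2).to(device)            # toy model for illustration
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

dataset = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
loader = DataLoader(dataset, batch_size=8)

scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for inputs, targets in loader:
    inputs, targets = inputs.to(device), targets.to(device)
    optimizer.zero_grad()
    # Run the forward pass in mixed precision where it is safe to do so.
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = criterion(model(inputs), targets)
    scaler.scale(loss).backward()  # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)         # unscales gradients, then steps the optimizer
    scaler.update()                # adjust the loss scale for the next iteration
```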