FastaiLRFinder does not run more than 1 epoch. Why?
See original GitHub issue

I have been trying to use the FastaiLRFinder to find the best learning rate for my model.
If we create the trainer with the function create_supervised_trainer as below:

    trainer = create_supervised_trainer(
        model, optimizer, criterion, device, output_transform=custom_output_transform
    )
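
For context, a minimal, self-contained setup around that call might look like the following sketch; the model, optimizer, criterion, and custom_output_transform here are hypothetical stand-ins, since the issue does not show them:

    import torch
    import torch.nn as nn
    from ignite.engine import create_supervised_trainer

    # Hypothetical stand-ins for the objects used in the issue.
    model = nn.Linear(10, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # create_supervised_trainer passes (x, y, y_pred, loss) to output_transform;
    # the LR finder expects the engine output to reduce to the batch loss.
    custom_output_transform = lambda x, y, y_pred, loss: loss.item()

    trainer = create_supervised_trainer(
        model, optimizer, criterion, device, output_transform=custom_output_transform
    )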
and run it:
    from ignite.contrib.handlers import FastaiLRFinder

    lr_finder = FastaiLRFinder()
    to_save = {"model": model, "optimizer": optimizer}

    with lr_finder.attach(
        trainer,
        to_save=to_save,
        num_iter=50,
        end_lr=1.0,
        step_mode="exp",
    ) as lr_finder_training:
        lr_finder_training.run(train_loader)
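
When the context manager exits, the trainer, model, and optimizer are restored to their original state and the finder exposes its results. A sketch of how one might inspect them, using methods from the FastaiLRFinder API:

    # Learning rates tried and the loss recorded at each of them.
    results = lr_finder.get_results()

    # Learning rate suggested by the finder (point of steepest loss descent).
    print(lr_finder.lr_suggestion())

    # Plot loss versus learning rate (requires matplotlib).
    lr_finder.plot()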
A warning comes up saying: “UserWarning: Desired num_iter 50 is unreachable with the current run setup of 15 iteration (1 epochs)”.
My dataloader has 15 batches to iterate, and the run uses the default of 1 epoch, so only 15 iterations are available, fewer than the requested num_iter=50. In other words, FastaiLRFinder does not allow you to run more than 1 epoch. Why?
According to the source code, the check we can see here caps the desired num_iter at the total number of iterations of the current run setup (epoch length times max epochs), which for a default single-epoch run is just one pass over the dataloader.
But why? Am I missing something important here?
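
For what it's worth, a possible workaround, assuming the check compares num_iter against epoch_length * max_epochs, is to run the attached trainer for enough epochs to cover the requested iterations; Engine.run accepts a max_epochs argument, so a sketch:

    import math

    num_iter = 50
    # With 15 batches per epoch, ceil(50 / 15) = 4 epochs give 60 >= 50 iterations.
    max_epochs = math.ceil(num_iter / len(train_loader))

    with lr_finder.attach(
        trainer, to_save=to_save, num_iter=num_iter, end_lr=1.0, step_mode="exp"
    ) as lr_finder_training:
        lr_finder_training.run(train_loader, max_epochs=max_epochs)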
Issue Analytics
- State:
- Created 3 years ago
- Comments: 5 (2 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@vfdev-5 Yes that’s a good one, I can work on it, thanks!
@KickItLikeShika you have worked on LR finder recently. Can I assign this issue to you to improve it?