Make LRScheduler attachable to Events.ITERATION_STARTED
See original GitHub issue

🚀 Feature
Currently, the correct way to use the LRScheduler wrapper for PyTorch >= 1.1.0 is the following:
from torch.optim.lr_scheduler import StepLR
from ignite.engine import Events
from ignite.handlers import LRScheduler

torch_lr_scheduler = StepLR(optimizer, step_size=3, gamma=0.1)
scheduler = LRScheduler(torch_lr_scheduler)

@trainer.on(Events.ITERATION_COMPLETED)
def print_lr():
    print(optimizer.param_groups[0]["lr"])

# In this example, we assume PyTorch >= 1.1.0 is installed
# (with the new `torch.optim.lr_scheduler` behaviour) and
# we attach the scheduler to Events.ITERATION_COMPLETED
# instead of Events.ITERATION_STARTED to make sure the first
# lr value from the optimizer is used, otherwise it will be skipped:
trainer.add_event_handler(Events.ITERATION_COMPLETED, scheduler)

trainer.run([0] * 8, max_epochs=1)
0.1
0.1
0.1
0.010
0.010
0.010
0.001
0.001
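The snippets in this issue assume that an optimizer and a trainer already exist. For reference, a minimal setup could look like the sketch below; the dummy parameter and the no-op update function are illustrative assumptions, not part of the original report:

import torch
from ignite.engine import Engine

# One dummy parameter so the optimizer has something to manage; initial lr is 0.1.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=0.1)

# A no-op update step; the engine only needs to iterate over the data.
def dummy_update(engine, batch):
    pass

trainer = Engine(dummy_update)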
However, other schedulers should be used as follows (link):
milestones_values = [(1, 1.0), (3, 0.8), (5, 0.2)]
scheduler = PiecewiseLinear(optimizer, "lr", milestones_values=milestones_values)
trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)

@trainer.on(Events.ITERATION_COMPLETED)
def print_lr():
    print(optimizer.param_groups[0]["lr"])

trainer.run([0] * 6, max_epochs=1)
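As an aside, ignite's param schedulers expose a simulate_values classmethod that can be used to preview the values a schedule would produce without touching any optimizer. A short sketch, assuming the 0.4.x ParamScheduler API:

from ignite.handlers import PiecewiseLinear

milestones_values = [(1, 1.0), (3, 0.8), (5, 0.2)]
# Returns a list of [event_index, value] pairs for the first num_events events.
values = PiecewiseLinear.simulate_values(
    num_events=6, param_name="lr", milestones_values=milestones_values
)
print(values)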
The idea is to improve LRScheduler
such that we could attach it to Events.ITERATION_STARTED
and have a coherent API. It will be a BC-breaking change, but for good.
So, the desired usage of LRScheduler should be:
from torch.optim.lr_scheduler import StepLR

torch_lr_scheduler = StepLR(optimizer, step_size=3, gamma=0.1)
scheduler = LRScheduler(torch_lr_scheduler)
trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)

@trainer.on(Events.ITERATION_COMPLETED)
def print_lr():
    print(optimizer.param_groups[0]["lr"])

trainer.run([0] * 8, max_epochs=1)
Currently, this gives wrong behaviour, as the first 0.1 wasn't consumed by the training step:
0.1
0.1
0.010
0.010
0.010
0.001
0.001
The idea could be to retain the first value, reapply it once, and then keep everything as it is now.
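A rough sketch of that idea, purely illustrative (the helper below is hypothetical, not ignite API): on the very first iteration the handler re-applies the optimizer's initial lr instead of stepping, and from the second iteration on it delegates to the usual scheduler call, so the wrapper can be attached to Events.ITERATION_STARTED:

from ignite.engine import Events

def attach_at_iteration_started(trainer, optimizer, scheduler):
    # Hypothetical helper: keep the initial lr for the first training step,
    # then step the wrapped scheduler as usual on every following iteration.
    initial_lrs = [group["lr"] for group in optimizer.param_groups]

    def handler(engine):
        if engine.state.iteration == 1:
            # Re-apply the first value once, so the first step consumes it.
            for group, lr in zip(optimizer.param_groups, initial_lrs):
                group["lr"] = lr
        else:
            scheduler(engine)

    trainer.add_event_handler(Events.ITERATION_STARTED, handler)

With the StepLR example above, this should reproduce the intended sequence of lr values while keeping the handler attached to Events.ITERATION_STARTED.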
Issue Analytics: created 2 years ago, 1 reaction, 13 comments (8 by maintainers)
@yuta0821 yes, sure! Thanks
@vfdev-5 OK, I understand. I checked whether your code works, and it works well.