ReduceLROnPlateau LRScheduler error
Hi,
I tried to use the LRScheduler handler for ReduceLROnPlateau from torch.optim, and it threw a TypeError:
import torch.optim as optim
from ignite.contrib.handlers.param_scheduler import LRScheduler

plateau_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer=optimizer, mode='min', factor=0.5, patience=1)
ignite_plateau_scheduler = LRScheduler(plateau_scheduler)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-13-56cb18e5af57> in <module>
1 plateau_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer=optimizer, mode='min', factor=0.5, patience=1)
----> 2 ignite_plateau_scheduler = LRScheduler(plateau_scheduler)
/net/vaosl01/opt/NFS/su0/anaconda3/envs/nlpbook/lib/python3.7/site-packages/pytorch_ignite-0.2.0-py3.7.egg/ignite/contrib/handlers/param_scheduler.py in __init__(self, lr_scheduler, save_history, **kwds)
424 if not isinstance(lr_scheduler, _LRScheduler):
425 raise TypeError("Argument lr_scheduler should be a subclass of torch.optim.lr_scheduler._LRScheduler, "
--> 426 "but given {}".format(type(lr_scheduler)))
427
428 if len(lr_scheduler.optimizer.param_groups) > 1:
TypeError: Argument lr_scheduler should be a subclass of torch.optim.lr_scheduler._LRScheduler, but given <class 'torch.optim.lr_scheduler.ReduceLROnPlateau'>
The example from the documentation with StepLR works without problems. It seems that the objects belong to different base classes, and this is indeed the case, as can be seen from the source here: StepLR inherits from _LRScheduler, while ReduceLROnPlateau does not (it inherits directly from object). Is there a way to bypass that check for this scheduler?
Thanks.
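
For illustration, a quick check confirms the base-class difference that trips the handler. This snippet assumes a PyTorch release from around the time of this issue; newer versions have reworked the scheduler class hierarchy, so the second result may differ there.

from torch.optim import lr_scheduler

# True: StepLR derives from the base class that ignite's LRScheduler handler checks for
print(issubclass(lr_scheduler.StepLR, lr_scheduler._LRScheduler))
# False on the PyTorch version in use here: ReduceLROnPlateau is not an _LRScheduler subclass
print(issubclass(lr_scheduler.ReduceLROnPlateau, lr_scheduler._LRScheduler))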

I wrote a wrapper of torch.optim.lr_scheduler.ReduceLROnPlateau and put it here as it might be useful to others.
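
For reference, a minimal sketch of what such a wrapper can look like (hypothetical, not the original code linked above): it reads a monitored metric from the engine state at the end of every epoch and forwards it to ReduceLROnPlateau.step(), sidestepping ignite's _LRScheduler type check. The class name, the metric_name parameter, and the evaluator engine in the usage lines are illustrative assumptions.

import torch.optim as optim
from ignite.engine import Events

class ReduceLROnPlateauHandler:
    """Forward a monitored metric to ReduceLROnPlateau.step() after each epoch."""

    def __init__(self, scheduler, metric_name):
        self.scheduler = scheduler      # a torch.optim.lr_scheduler.ReduceLROnPlateau instance
        self.metric_name = metric_name  # key to look up in engine.state.metrics

    def __call__(self, engine):
        # engine.state.metrics is filled by the metrics attached to the engine
        value = engine.state.metrics[self.metric_name]
        self.scheduler.step(value)

    def attach(self, engine):
        engine.add_event_handler(Events.EPOCH_COMPLETED, self)

Usage, assuming an evaluator engine that computes a 'loss' metric:

plateau_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=1)
ReduceLROnPlateauHandler(plateau_scheduler, metric_name='loss').attach(evaluator)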
Ok, thanks. I just started using ignite today, so I’m still having some basic questions. Thanks for your help.