
ReduceLROnPlateau LRScheduler error

See original GitHub issue

Hi,

I tried to use the LRScheduler handler with ReduceLROnPlateau from torch.optim and it threw a TypeError:

plateau_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer=optimizer, mode='min', factor=0.5, patience=1)
ignite_plateau_scheduler = LRScheduler(plateau_scheduler)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-13-56cb18e5af57> in <module>
      1 plateau_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer=optimizer, mode='min', factor=0.5, patience=1)
----> 2 ignite_plateau_scheduler = LRScheduler(plateau_scheduler)

/net/vaosl01/opt/NFS/su0/anaconda3/envs/nlpbook/lib/python3.7/site-packages/pytorch_ignite-0.2.0-py3.7.egg/ignite/contrib/handlers/param_scheduler.py in __init__(self, lr_scheduler, save_history, **kwds)
    424         if not isinstance(lr_scheduler, _LRScheduler):
    425             raise TypeError("Argument lr_scheduler should be a subclass of torch.optim.lr_scheduler._LRScheduler, "
--> 426                             "but given {}".format(type(lr_scheduler)))
    427 
    428         if len(lr_scheduler.optimizer.param_groups) > 1:

TypeError: Argument lr_scheduler should be a subclass of torch.optim.lr_scheduler._LRScheduler, but given <class 'torch.optim.lr_scheduler.ReduceLROnPlateau'>

The example from the documentation with StepLR works without problems. It seems that the two objects have different base classes. This is indeed true, as can be seen from the source here: StepLR inherits from _LRScheduler, while ReduceLROnPlateau does not have a base class (it inherits from object). Is there a way to bypass that check for this scheduler?
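
For reference, a minimal check (a sketch, assuming the pre-2.0 torch.optim.lr_scheduler class hierarchy described above) reproduces the distinction:

# Sketch: confirm the class-hierarchy difference described above.
# On the torch version used in this issue, ReduceLROnPlateau does not
# inherit from _LRScheduler; newer releases may differ.
from torch.optim.lr_scheduler import _LRScheduler, StepLR, ReduceLROnPlateau

print(issubclass(StepLR, _LRScheduler))             # True
print(issubclass(ReduceLROnPlateau, _LRScheduler))  # False on older torch versions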

Thanks.

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Reactions: 1
  • Comments: 8

Top GitHub Comments

5 reactions
ismael-elatifi commented, Feb 2, 2021

I wrote a wrapper of torch.optim.lr_scheduler.ReduceLROnPlateau and put it here as it might be useful to others.

import torch
from ignite.engine import Events
from torch.optim.lr_scheduler import ReduceLROnPlateau


class ReduceLROnPlateauScheduler:
    """Wrapper of torch.optim.lr_scheduler.ReduceLROnPlateau with a __call__ method like the other
    schedulers in contrib/handlers/param_scheduler.py."""

    def __init__(
            self,
            optimizer,
            metric_name,
            mode='min', factor=0.1, patience=10,
            threshold=1e-4, threshold_mode='rel', cooldown=0,
            min_lr=0, eps=1e-8, verbose=False,
    ):
        self.metric_name = metric_name
        self.scheduler = ReduceLROnPlateau(optimizer, mode=mode, factor=factor, patience=patience,
                                           threshold=threshold, threshold_mode=threshold_mode, cooldown=cooldown,
                                           min_lr=min_lr, eps=eps, verbose=verbose)

    def __call__(self, engine, name=None):
        # Step the wrapped scheduler with the metric computed by the engine
        self.scheduler.step(engine.state.metrics[self.metric_name])

    def state_dict(self):
        return self.scheduler.state_dict()

# usage example (assumes a `model` and an ignite `evaluator` engine that computes "my_metric")
optimizer = torch.optim.Adam(model.parameters())
scheduler = ReduceLROnPlateauScheduler(optimizer, metric_name="my_metric", mode="max")
evaluator.add_event_handler(Events.COMPLETED, scheduler)
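
A lighter-weight alternative, sketched here under the same assumptions (an `evaluator` engine, a `plateau_scheduler` instance, and a metric named "my_metric" are illustrative, not ignite API), is to step the plain torch scheduler from an event handler instead of wrapping it:

# Sketch: call the raw torch ReduceLROnPlateau from an ignite event handler
@evaluator.on(Events.COMPLETED)
def step_plateau_scheduler(engine):
    plateau_scheduler.step(engine.state.metrics["my_metric"])
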
4 reactions
sudarshan85 commented, Mar 22, 2019

OK, thanks. I just started using ignite today. That's why I'm having some basic questions. Thanks for your help.

Read more comments on GitHub >

Top Results From Across the Web

  • ReduceLROnPlateau — PyTorch 1.13 documentation: This scheduler reads a metrics quantity and, if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced. ...
  • optim.lr_scheduler.ReduceLROnPlateau gives error value ...: I'm using gpu tensors, e.g. Variable(torch.from_numpy(X).type(torch.FloatTensor).cuda(), requires_grad=False). If I cast it to the cpu like ...
  • tf.keras.callbacks.ReduceLROnPlateau | TensorFlow v2.11.0: This callback monitors a quantity and, if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced.
  • PyTorch LR Scheduler - Adjust The ... - Python Engineer: In this PyTorch Tutorial we learn how to use a Learning Rate (LR) Scheduler to adjust the LR during training.
  • LightningModule - PyTorch Lightning - Read the Docs: ReduceLROnPlateau scheduler, Lightning requires that the ... If an LR scheduler is specified for an optimizer using the lr_scheduler key in the above ...
