
"Please add the following callbacks" warning

See original GitHub issue

🐛 Bug

PyTorch Lightning emits a (seemingly) false-positive warning when restoring from a checkpoint during model tuning with auto_lr_find.

To Reproduce

import os

import torch
from torch.utils.data import DataLoader, Dataset

from pytorch_lightning import LightningModule, Trainer
from pytorch_lightning.callbacks import ModelCheckpoint


class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):
    def __init__(self, lr=0.1):
        super().__init__()  # must run before save_hyperparameters()
        self.save_hyperparameters()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("train_loss", loss)
        return {"loss": loss}

    def validation_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("valid_loss", loss)

    def test_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("test_loss", loss)

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=self.hparams.lr)

    def train_dataloader(self):
        return DataLoader(RandomDataset(32, 1024), batch_size=2, num_workers=8)

    def val_dataloader(self):
        return DataLoader(RandomDataset(32, 1024), batch_size=2, num_workers=8)

    def test_dataloader(self):
        return DataLoader(RandomDataset(32, 1024), batch_size=2, num_workers=8)



def run():
    model = BoringModel()

    trainer = Trainer(
        default_root_dir=os.getcwd(),
        max_epochs=1,
        auto_lr_find=True,
        callbacks=[ModelCheckpoint()]
    )
    # tune() runs the LR finder because auto_lr_find=True; the warning is emitted here
    trainer.tune(model, lr_find_kwargs={"early_stop_threshold": None})


if __name__ == "__main__":
    run()

Expected behavior

No warnings

Actual behavior

/usr/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:1721: UserWarning: Be aware that when using `ckpt_path`, callbacks used to create the checkpoint need to be provided during `Trainer` instantiation. Please add the following callbacks: ["ModelCheckpoint{'monitor': None, 'mode': 'min', 'every_n_train_steps': 0, 'every_n_epochs': 1, 'train_time_interval': None, 'save_on_train_epoch_end': None}"].
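
If the warning really is a false positive in this setup, one stopgap (not from the thread; just the standard warnings module) is a sketch that filters this specific message before calling tune():

import warnings

# Stopgap sketch: silence only this UserWarning. The message pattern below is
# an assumption based on the log above; filterwarnings() matches it as a regex
# against the start of the warning text.
warnings.filterwarnings(
    "ignore",
    message=r"Be aware that when using `ckpt_path`",
    category=UserWarning,
)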

Environment

* CUDA:
	- GPU:
		- NVIDIA GeForce GTX 1050
	- available:         True
	- version:           11.3
* Packages:
	- numpy:             1.22.3
	- pyTorch_debug:     False
	- pyTorch_version:   1.11.0+cu113
	- pytorch-lightning: 1.6.0
	- tqdm:              4.63.2
* System:
	- OS:                Linux
	- architecture:
		- 64bit
		- ELF
	- processor:         
	- python:            3.10.4
	- version:           #1 SMP PREEMPT Mon Mar 28 09:16:36 UTC 2022

cc @otaj @akihironitta @borda @rohitgr7

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

2 reactions
RuRo commented, Apr 21, 2022

To be honest, I don’t really know why this is happening or how it should be fixed. Also, I am really busy right now, so even if I were familiar with the internals of the checkpointing mechanism, I still wouldn’t be able to contribute to this issue right now.

1 reaction
RuRo commented, Apr 26, 2022

@rohitgr7 Sorry, I don’t understand the question. The example code that I provided is a full example. I am not passing any checkpoint paths anywhere.

I think that the checkpoint is created and restored automatically by auto_lr_find during tune.
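
For reference, here is a rough sketch of what tune() appears to do in that case (assuming the PL 1.6 tuner API, with trainer and model as in the repro above; this equivalence is an assumption, not confirmed in the thread):

# Rough equivalent of trainer.tune(model, lr_find_kwargs={"early_stop_threshold": None})
# when auto_lr_find=True (PL 1.6 tuner API). lr_find saves a temporary checkpoint,
# sweeps learning rates, and restores the checkpoint afterwards -- presumably that
# restore step is what triggers the callback warning.
lr_finder = trainer.tuner.lr_find(model, early_stop_threshold=None)
suggested_lr = lr_finder.suggestion()  # learning rate suggested by the sweep
model.hparams.lr = suggested_lr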

Read more comments on GitHub.
