
Allow callbacks to access internal variables of training loop

See original GitHub issue

🚀 Feature

Internal variables (batch, predictions, etc.) of the training loop (training + validation steps) should be made transparent to callbacks. Right now, there is no way for a callback to access these internal variables of the training loop without storing them as attributes of the LightningModule. This is not optimal, as it pollutes the LightningModule with non-essential code.

Motivation

Use case: visualize images and predictions from a training batch. Right now, there are two ways:

  1. Add a log method to the pl module and call it from the pl module's training_step method. By doing this, we are essentially polluting the pl module with non-essential code.

  2. Write a visualization callback. As of now, a callback has access to the pl module and the trainer, but it still can't access the variables (images, predictions, etc.) inside the training step. We could store these variables as attributes of the pl module, but updating those attributes on every training step (so that the callback can read them) would also count as "non-essential code", which defeats the point of the callback. It also spoils the neat separation between callbacks and the pl module, since we would be caching/updating attributes in the pl module even when the callback is switched off.
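The pattern being requested can be sketched in plain Python (this is a minimal illustration, not Lightning's actual API — the hook name and signature here are hypothetical): the training loop hands its internal variables (batch, predictions) directly to each callback hook, so a visualization callback never needs to read attributes off the module.

```python
# Minimal sketch of a training loop that passes its internals to callbacks.
# Not Lightning's real API; names (Callback, on_train_batch_end, train) are
# illustrative only.

class Callback:
    def on_train_batch_end(self, module, batch, predictions):
        """Called after each training step with the loop's internal variables."""


class VisualizationCallback(Callback):
    def __init__(self):
        self.seen = []

    def on_train_batch_end(self, module, batch, predictions):
        # A real callback would render images here; we just record what we saw.
        self.seen.append((batch, predictions))


def train(step_fn, batches, callbacks):
    for batch in batches:
        predictions = step_fn(batch)        # the "training_step"
        for cb in callbacks:                # hand internals straight to callbacks
            cb.on_train_batch_end(step_fn, batch, predictions)


viz = VisualizationCallback()
train(lambda b: [x * 2 for x in b], [[1, 2], [3, 4]], [viz])
print(viz.seen)  # [([1, 2], [2, 4]), ([3, 4], [6, 8])]
```

Because the loop passes `batch` and `predictions` as hook arguments, the module itself stays free of visualization state, which is exactly the separation the issue asks for.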

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 7
  • Comments: 27 (13 by maintainers)

Top GitHub Comments

Borda commented, Apr 20, 2020 (6 reactions)

we can offer some read-only properties…
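The read-only-properties idea could look like the following plain-Python sketch (this is an assumption about the design, not Lightning's implementation — the `Trainer` class and its members here are hypothetical): the trainer caches loop internals privately and exposes them through getter-only properties, so callbacks can inspect the latest batch and predictions but cannot mutate them.

```python
# Hedged sketch of "read-only properties" on a trainer-like object.
# Getter-only properties reject assignment, giving callbacks safe,
# inspect-only access to loop internals.

class Trainer:
    def __init__(self):
        self._batch = None
        self._predictions = None

    @property
    def batch(self):            # read-only: no setter is defined
        return self._batch

    @property
    def predictions(self):      # read-only: no setter is defined
        return self._predictions

    def run_step(self, step_fn, batch):
        # The loop updates its private state; callbacks only see properties.
        self._batch = batch
        self._predictions = step_fn(batch)


trainer = Trainer()
trainer.run_step(lambda b: sum(b), [1, 2, 3])
print(trainer.predictions)  # 6

try:
    trainer.predictions = 0  # assignment is rejected by the property
except AttributeError:
    print("predictions is read-only")
```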

JohnnyRisk commented, Sep 9, 2020 (4 reactions)

Has this been resolved? I am trying to implement a callback for visualization and currently facing the same issue.


