Allow callbacks to access internal variables of training loop
🚀 Feature
Internal variables (batch, predictions, etc.) of the training loop (training + validation step) should be made accessible to callbacks. Right now, there is no way to access these internal variables of the training loop from callbacks without making them attributes of the lightning module. This isn't optimal, as it pollutes the lightning module with non-essential code.
Motivation
Use case: Visualize images and predictions from a training batch. Right now, there are two ways:
- Add a `log` method in the pl module and call it from the `training_step` method. By doing this, we are essentially polluting the pl module with non-essential code.
- Write a visualization callback. As of now, a callback has access to the pl module and the trainer, but it still can't access the variables (images, predictions, etc.) in the training step. We could make these variables attributes of the pl module, but updating those attributes in every training step (so that the callback can access them) would also count as "non-essential code", which defeats the point of the callback. It also spoils the neat separation between callbacks and the pl module, as we'd be caching/updating attributes in the pl module even when the callback is switched off.
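To make the second workaround concrete, here is a minimal sketch of the attribute-caching pattern it describes. The classes and names (`DemoModule`, `VisualizationCallback`, `run_training`) are hypothetical stand-ins that only mimic the pl-module/callback structure; they are not the real PyTorch Lightning API.

```python
class DemoModule:
    """Plays the role of the pl module in the workaround above."""

    def __init__(self):
        # Non-essential state, kept only so a callback can read it later.
        self.last_batch = None
        self.last_predictions = None

    def training_step(self, batch):
        predictions = [x * 2 for x in batch]  # stand-in for a forward pass
        # The "pollution": cache loop internals as attributes on every step.
        self.last_batch = batch
        self.last_predictions = predictions
        return sum(predictions)  # stand-in loss


class VisualizationCallback:
    """Reads the cached attributes; breaks silently if caching is removed."""

    def __init__(self):
        self.seen = []

    def on_train_batch_end(self, module):
        self.seen.append((module.last_batch, module.last_predictions))


def run_training(module, callbacks, batches):
    """Stand-in for the trainer's fit loop."""
    for batch in batches:
        module.training_step(batch)
        for cb in callbacks:
            cb.on_train_batch_end(module)


module = DemoModule()
cb = VisualizationCallback()
run_training(module, [cb], [[1, 2], [3, 4]])
print(cb.seen)  # the callback only sees what the module chose to cache
```

Note that the caching lines in `training_step` must stay even when the callback is disabled, which is exactly the coupling the issue objects to.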
Issue Analytics
- Created: 3 years ago
- Reactions: 7
- Comments: 27 (13 by maintainers)
we can offer some read-only properties…
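A hedged sketch of what such read-only properties could look like: the trainer forwards loop internals through `@property` accessors with no setters, so callbacks can read but not mutate them. `Loop`, `Trainer`, and the property names here are hypothetical, not the real PyTorch Lightning API.

```python
class Loop:
    """Owns the mutable training-loop state."""

    def __init__(self):
        self._batch = None
        self._outputs = None

    def advance(self, batch):
        self._batch = batch
        self._outputs = [x + 1 for x in batch]  # stand-in for a training step


class Trainer:
    """Exposes loop internals to callbacks as read-only properties."""

    def __init__(self):
        self._loop = Loop()

    @property
    def batch(self):
        return self._loop._batch

    @property
    def outputs(self):
        return self._loop._outputs


trainer = Trainer()
trainer._loop.advance([1, 2, 3])
print(trainer.batch, trainer.outputs)

# Assignment raises AttributeError (no setter), keeping callbacks read-only.
try:
    trainer.outputs = []
except AttributeError:
    print("read-only")
```

The appeal of this design is that the pl module stays clean: the loop keeps its state private, and callbacks read it via the trainer instead of via cached module attributes.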
Has this been resolved? I am trying to implement a callback for visualization and currently facing the same issue.