Training loss is not being logged for pytorch-lightning versions greater than 0.8.5
See original GitHub issue

Hi, when using wandb with pytorch-lightning I noticed that the training loss is no longer logged automatically. I tried the versions > 0.8.5. The test loss is still being fetched. Since this problem also affects other logging integrations, e.g. Neptune.ai (verified), I am not sure on which side the logging is failing.
Tried code:
import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger

wandb_logger = WandbLogger(project="project")
trainer = pl.Trainer(max_epochs=5, logger=wandb_logger, gpus=0, weights_summary='full')
Happened to anyone else?
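For context on why the bare loss stops reaching the logger: in the 0.8.x-era API, only metrics placed under the `log` key of the dict returned by `training_step` were forwarded to the attached logger, while the `loss` entry was consumed for backpropagation only. The following is an illustrative pure-Python mock of that routing (the function name `process_training_step_output` is hypothetical, not real pytorch-lightning code), assuming that dict-based contract:

```python
# Illustrative mock (NOT real pytorch-lightning internals): in the
# 0.8.x-style API, only the 'log' sub-dict of training_step's return
# value was forwarded to the logger (wandb, Neptune, ...); the bare
# 'loss' entry was used for the backward pass but never logged.

def process_training_step_output(output, logger_metrics):
    """Mimic how an 0.8.x-era trainer routed training_step output."""
    # 'loss' is consumed for the backward pass only
    loss = output["loss"]
    # only the 'log' sub-dict reaches the attached logger
    logger_metrics.update(output.get("log", {}))
    return loss

logged = {}

# Returning only the loss: nothing is sent to the logger.
process_training_step_output({"loss": 0.42}, logged)
assert logged == {}

# Wrapping metrics under 'log' makes them reach the logger.
process_training_step_output({"loss": 0.42, "log": {"train_loss": 0.42}}, logged)
assert logged == {"train_loss": 0.42}
```

This matches the symptom reported above: a `training_step` that returns only `{"loss": loss}` trains fine but logs nothing to wandb or Neptune.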
Issue Analytics
- Created: 3 years ago
- Comments: 8 (3 by maintainers)
Top Results From Across the Web

Logging — PyTorch Lightning 1.8.5.post0 documentation
The progress bar by default already includes the training loss and version number of the experiment if you are using a logger.

pytorch-lightning 0.8.5 - PyPI
Research code (goes in the LightningModule). Engineering code (you delete, and is handled by the Trainer). Non-essential research code (logging, ...)

Use PyTorch Lightning with Weights & Biases - Wandb
Train loss and validation loss for the particular run are automatically logged in the dashboard in real time as the model is being ...

Keeping Up with PyTorch Lightning and Hydra — 2nd Edition
The new, simplified logging interface helps you not repeat yourself in metrics logging. In training_step(), I calculate the overall loss and ...

I’m not sure which version you use, but the pattern for logging is now with self.log (see the docs).

Hi @FraPochetti! Sorry @borisdayma for not getting back at this. I remember that I tried self.log, but at that time I was logging a lot, and if I recall correctly, I was only able to log a single variable with it? (Not quite sure.)

@FraPochetti I ended up with the following: I think the loss in the return statement never got logged and I was just a bit careless. I think the trick was using tensorboard_logs for log. Let me know if this works. You can also have a look at the full code here: github.com/sneakyPad/decoding-latent-space-rs/blob/master/models/movies_vae.py