Train loss vs loss on progress bar
❓ Questions and Help
What is your question?
I don’t understand why train_loss is different from loss even though I assign the same value. Perhaps one loss is calculated over the whole dataset and the other one is only for the most recent batch? But if that’s the case, which one is which?
Epoch 1: 79%|███████▉ | 691/870 [00:07<00:01, 100.85batch/s, accuracy=0.7, batch_idx=690, gpu=0, loss=0.442, train_loss=0.535, v_num=2]
Code
def training_step(self, batch, batch_idx):
    ...
    loss = self.loss(...)
    # shown on the progress bar and sent to the logger under the key 'train_loss'
    tqdm_dict = {'train_loss': loss}
    outputs = {
        'loss': loss,              # the value Lightning uses for backpropagation
        'progress_bar': tqdm_dict,
        'log': tqdm_dict
    }
    return outputs
What’s your environment?
- OS: Ubuntu
- Packaging: pip
- Version: 0.6.0
loss on the progress bar is a running average. What you return (train_loss) is not.
@Borda can we note this in the docs?
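To see why the two numbers drift apart, here is a minimal sketch, illustration only and not Lightning's actual internals (the window size of 20 and the RunningMean helper are assumptions): the progress-bar loss behaves like a windowed running mean over recent batch losses, while train_loss is simply the raw value returned by the most recent training_step.

from collections import deque

class RunningMean:
    """Windowed running mean, for illustration only."""
    def __init__(self, window=20):  # window size is an assumption
        self.values = deque(maxlen=window)

    def update(self, value):
        self.values.append(value)
        return sum(self.values) / len(self.values)

running = RunningMean()
for batch_idx, batch_loss in enumerate([0.9, 0.7, 0.6, 0.5, 0.442]):
    smoothed = running.update(batch_loss)
    # 'loss' on the bar ~ smoothed value; 'train_loss' ~ raw value from this step
    print(f"batch {batch_idx}: loss={smoothed:.3f}  train_loss={batch_loss:.3f}")

Running this prints a smoothed loss that lags behind the per-batch train_loss, which matches the gap between loss=0.442 and train_loss=0.535 in the progress bar above.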