How to compute loss during evaluation?
❓ Questions and Help
How would you recommend computing the loss during evaluation? For example, I set my model to evaluation mode with model.eval() and get predictions:

```python
with torch.no_grad():
    predictions = model(images)
```

Then I calculate the loss between the predictions and the targets, something like this:

```python
criterion = nn.CrossEntropyLoss()
loss = criterion(predictions, targets)
```

But the code above doesn't work.
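A pattern that does work for a plain classification model (one whose forward pass returns raw logits) is to keep the model in eval mode, disable gradient tracking, and accumulate the criterion over a validation loader. The sketch below only illustrates that pattern and is not the confirmed fix for this issue; val_loader, device, and the evaluate helper are assumed names, and the model is assumed to return logits of shape (batch, num_classes):

```python
import torch
import torch.nn as nn

def evaluate(model, val_loader, criterion, device):
    """Return the mean per-sample loss over the validation set."""
    model.eval()                    # switch off dropout, use running batch-norm statistics
    total_loss, total_samples = 0.0, 0
    with torch.no_grad():           # no autograd graph is built, so nothing can be backpropagated
        for images, targets in val_loader:
            images, targets = images.to(device), targets.to(device)
            logits = model(images)              # raw, unnormalized class scores
            loss = criterion(logits, targets)   # mean loss for this batch
            total_loss += loss.item() * images.size(0)
            total_samples += images.size(0)
    return total_loss / total_samples

# usage (model, val_loader, and device are assumed to exist already)
criterion = nn.CrossEntropyLoss()
val_loss = evaluate(model, val_loader, criterion, device)
```

Multiplying each batch loss by the batch size before dividing by the total sample count keeps the average correct even when the last batch is smaller, and loss.item() extracts a plain Python float so nothing stays on the GPU or in the autograd graph.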
Issue Analytics
- Created: 5 years ago
- Reactions: 1
- Comments: 7 (4 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
My goal is to see the convergence of the loss on the train dataset and the validation dataset, something like that. Obviously I cannot set the model to train mode, because then my model would learn the validation set as well. Is there another way to do it?
You can normalize it to be in the same range. Also, the validation accuracy is expected to go up, while the training loss should go down.
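To get the convergence picture described above (training loss vs. validation loss per epoch), a common arrangement is to switch to model.train() only for the training pass, call optimizer.step() only there, and reuse an evaluation helper like the one earlier, which never updates any weights. This is a rough sketch, not anything from the thread; train_loader, val_loader, optimizer, num_epochs, and evaluate are placeholder assumptions:

```python
def fit(model, train_loader, val_loader, optimizer, criterion, device, num_epochs):
    history = {"train_loss": [], "val_loss": []}
    for epoch in range(num_epochs):
        # training pass: train mode, gradients on, optimizer steps
        model.train()
        running_loss, seen = 0.0, 0
        for images, targets in train_loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()
            running_loss += loss.item() * images.size(0)
            seen += images.size(0)
        train_loss = running_loss / seen

        # validation pass: eval mode, no_grad, no optimizer step, so nothing is learned here
        val_loss = evaluate(model, val_loader, criterion, device)  # helper sketched earlier

        history["train_loss"].append(train_loss)
        history["val_loss"].append(val_loss)
        print(f"epoch {epoch}: train loss {train_loss:.4f}, val loss {val_loss:.4f}")
    return history
```

Since both numbers are mean per-sample losses from the same criterion, they already sit on a comparable scale, which is one way to read the "normalize it to be in the same range" suggestion; and because the validation pass performs no backward step or optimizer update, evaluating on the validation set does not train on it.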