What is the correct way to calculate validation loss?
Hi!
I was wondering what the correct way to calculate validation loss is when using the `ImagenTrainer` without the `valid_step` method. I'm having trouble using that method because I'm not sure what format the data returned by the validation dataloader should have. (I normally don't use a training dataloader.)
I tried doing:

```python
with torch.no_grad():
    # call trainer here
```

but that doesn't make sense, since it looks like the trainer updates the weights inside its `forward` method.
I'm thinking of trying:

```python
with torch.no_grad():
    validation_loss = trainer.imagen(images=foo, texts=bar)
```

although that seems like it might skip the EMA handling.
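For what it's worth, the EMA concern can also be handled outside the trainer by keeping an exponential-moving-average copy of the weights yourself and evaluating the loss with that copy. A minimal sketch with a toy model (the `ema_update` helper and the decay value are illustrative, not part of imagen-pytorch's API):

```python
import copy
import torch
from torch import nn

# Toy stand-in for the online (trained) model
model = nn.Linear(4, 1)
ema_model = copy.deepcopy(model)  # EMA copy, updated after each optimizer step

def ema_update(ema, online, decay=0.995):
    # p_ema <- decay * p_ema + (1 - decay) * p_online
    with torch.no_grad():
        for p_ema, p in zip(ema.parameters(), online.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1 - decay)

# ...after some training steps on `model`:
ema_update(ema_model, model)

# Validation loss computed with the EMA weights, no gradient tracking
with torch.no_grad():
    ema_model.eval()
    x, y = torch.randn(8, 4), torch.randn(8, 1)
    val_loss = nn.functional.mse_loss(ema_model(x), y).item()
```

The same pattern generalizes to any model whose forward pass returns a loss.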
Side note, thanks for answering all of my questions. It’s been super helpful 😃
Issue Analytics
- State:
- Created a year ago
- Reactions: 1
- Comments: 8 (4 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@lucidrains Oh, and presumably you'd also want to wrap the entire validation loop inside of a `torch.no_grad()`, I assume? These questions are kind of obvious, but I'm asking b/c it seems like the trainer takes care of a ton of things, so it might also just take care of turning on `torch.no_grad` (maybe).

@vedantroy yup, correct!
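Putting the confirmed advice together, wrapping the whole validation loop in `torch.no_grad()` looks like this. Sketched with a generic `nn.Module` and a fake loader, since the exact format the trainer's validation dataloader expects varies (the `validation_loss` helper name is illustrative):

```python
import torch
from torch import nn

# Toy stand-ins for the model and the validation dataloader
model = nn.Linear(4, 1)
valid_loader = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(3)]

def validation_loss(model, loader):
    model.eval()  # disable dropout, use running batch-norm statistics
    total, n = 0.0, 0
    with torch.no_grad():  # wrap the *entire* loop: no graphs, no weight updates
        for x, y in loader:
            total += nn.functional.mse_loss(model(x), y).item() * x.size(0)
            n += x.size(0)
    model.train()  # restore training mode afterwards
    return total / n

avg_val_loss = validation_loss(model, valid_loader)
```

Averaging per-sample rather than per-batch keeps the number comparable when the last batch is smaller than the rest.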