Is validation loss computed and output?
Thank you for your great work. I'd like to ask a small question. While I can find evaluation scores such as mIoU, I cannot find the validation loss anywhere (in TensorBoard, standard output, log.json, etc.).
- Is the validation loss not computed at all?
- Is it computed but not output by default (i.e., can I output it by changing the config)?
- Is it computed and output, and am I simply missing it?
I used the following config and launch command:
python tools/train.py configs/deeplabv3plus/deeplabv3plus_r50-d8_512x1024_80k_cityscapes.py
I set
workflow = [('train', 10), ('val', 1)]
evaluation = dict(interval=2000, metric='mIoU')
where 1 epoch = 300 iterations.
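For context, the two settings above can be expressed as a small derived config; this is only a sketch assuming the standard MMSegmentation config-inheritance layout, and the file name is hypothetical:

```python
# my_deeplabv3plus_valloss.py -- hypothetical derived config placed next to
# the base config in configs/deeplabv3plus/.
_base_ = './deeplabv3plus_r50-d8_512x1024_80k_cityscapes.py'

# Alternate 10 training epochs with 1 validation epoch.
workflow = [('train', 10), ('val', 1)]

# Compute mIoU on the validation set every 2000 iterations.
evaluation = dict(interval=2000, metric='mIoU')
```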
Thanks for any help.
Hello @rubeea, sorry for the late reply. I have been crazily busy this week.
Thank you for your comment. Yes, I have changed the workflow to include val.
Unfortunately, I have not encountered a similar problem when setting [('train', 1)].
Both [('train', 1)] and [('train', 1), ('val', 1)] worked in my case.
As for the original problem in this issue, i.e. outputting the validation loss to TensorBoard, I found a workaround today.
In mmseg/models/segmentors/base.py, validation loss is calculated in def val_step.
https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/segmentors/base.py#L162
To show a loss in TensorBoard, we need the key 'log_vars' in the output dictionary. This key exists in the train output (from def train_step), but not in the val output. That is why the val loss is not shown in TensorBoard, I suppose. So I simply mimicked def train_step and added loss parsing after the line
output = self(**data_batch, **kwargs)
in def val_step. I slightly changed the names by adding the prefix 'val_' to the keys; otherwise, I think the val loss would not be distinguished from the train loss in TensorBoard. In my case, this workaround worked and the val loss is shown in TensorBoard. (One unsatisfactory point is that the val loss appears under the 'train' tab… this is ugly but not a problem in practice.) I hope this helps.
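For reference, here is a minimal sketch of what the modified val_step could look like; it mirrors the structure of train_step in the same file, and the _parse_losses call, the 'val_' prefixing, and the num_samples field are reconstructed from the description above rather than copied from an actual patch:

```python
# mmseg/models/segmentors/base.py -- sketch of a val_step modified to return
# 'log_vars' the same way train_step does, so logger hooks can pick it up.
def val_step(self, data_batch, **kwargs):
    output = self(**data_batch, **kwargs)
    # Added: reduce the raw loss dict into a scalar loss plus log variables,
    # mimicking train_step.
    loss, log_vars = self._parse_losses(output)
    # Prefix the keys with 'val_' so validation losses are distinguishable
    # from training losses in TensorBoard.
    log_vars = {'val_' + name: value for name, value in log_vars.items()}
    outputs = dict(
        loss=loss,
        log_vars=log_vars,
        num_samples=len(data_batch['img_metas']))
    return outputs
```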
Hi,
Actually, you are right; those are indeed the training losses, while the metrics are computed on the validation dataset. Kindly report the solution here if you find a workaround. Thanks 😃