Log evaluation loss while testing model
I would love to see the loss for the test set while evaluating the model during training. I have modified the code to return the loss when testing:
```python
# SSDDetector from the lufficc/SSD repo, with forward() modified to also
# return the loss dict outside of training mode.
# build_backbone / build_box_head are the factory helpers already defined
# in the repo's ssd.modeling package.
from torch import nn


class SSDDetector(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.cfg = cfg
        self.backbone = build_backbone(cfg)
        self.box_head = build_box_head(cfg)

    def forward(self, images, targets=None):
        features = self.backbone(images)
        detections, detector_losses = self.box_head(features, targets)
        if self.training:
            return detector_losses
        # modification: return the losses alongside the detections at eval time
        return detections, detector_losses
```
However, `detector_losses` is empty during evaluation/validation. Your suggestions would be most helpful.
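For reference, once the model does return losses at eval time, a validation loop along the following lines can log them. This is a generic PyTorch sketch, not code from the SSD repo; it assumes the dataloader still yields `(images, targets, ...)` batches with the targets as a dict of tensors.

```python
import torch


# Generic validation-loss sketch (assumptions: model(images, targets=...) returns
# (detections, loss_dict) in eval mode, as in the modified SSDDetector above, and
# val_loader yields (images, targets, ...) batches with targets as a dict of tensors).
@torch.no_grad()
def evaluate_loss(model, val_loader, device="cuda"):
    model.eval()
    totals, num_batches = {}, 0
    for images, targets, *_ in val_loader:
        images = images.to(device)
        targets = {k: v.to(device) for k, v in targets.items()}
        _, loss_dict = model(images, targets=targets)
        for name, value in loss_dict.items():
            totals[name] = totals.get(name, 0.0) + float(value)
        num_batches += 1
    # average each loss over the validation set
    return {name: total / max(num_batches, 1) for name, total in totals.items()}
```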
Top GitHub Comments
Finally, I can get the validation loss after the following edit. The reason was that the network outputs normalized bounding boxes while the dataloader output (the target labels) was not normalized, so the target coordinate values were far too large and the regression loss blew up.
The additional changes are in https://github.com/lufficc/SSD/blob/50373c79b861d5d239be4206fafc6661cea040b4/ssd/data/transforms/__init__.py#L20
And the validation loss now looks good.
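The comment does not include the diff itself, but a plausible reading (a hedged sketch, not the author's exact change) is to make the eval/test transform pipeline in ssd/data/transforms/__init__.py normalize the ground-truth boxes the same way the training pipeline does, for example:

```python
# Hedged sketch only, not the verified diff. The transform class names
# (Compose, Resize, ToPercentCoords, SubtractMeans, ToTensor) are assumed to be
# the ones defined in the repo's ssd/data/transforms/transforms.py.
from ssd.data.transforms.transforms import (
    Compose, Resize, SubtractMeans, ToPercentCoords, ToTensor,
)


def build_transforms(cfg, is_train=True):
    if is_train:
        # training branch unchanged in the actual file; omitted in this sketch
        raise NotImplementedError
    transform = [
        Resize(cfg.INPUT.IMAGE_SIZE),
        ToPercentCoords(),                    # added: scale target boxes to [0, 1]
        SubtractMeans(cfg.INPUT.PIXEL_MEAN),
        ToTensor(),
    ]
    return Compose(transform)
```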
SSD won’t compute the losses when testing since it’s unnecessary and time-consuming. But if you just want to see them, here are some suggestions:
- compute targets when testing by passing `target_transform`: https://github.com/lufficc/SSD/blob/master/ssd/data/build.py#L32
- remember to pass the targets to the model: https://github.com/lufficc/SSD/blob/master/ssd/engine/inference.py#L43
- compute the loss when testing here https://github.com/lufficc/SSD/blob/master/ssd/modeling/box_head/box_head.py#L39, following https://github.com/lufficc/SSD/blob/master/ssd/modeling/box_head/box_head.py#L31-L34 (see the sketch after this list)
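A rough illustration of that last step (a hedged sketch: `self.loss_evaluator`, the target dict keys, and the `_decode_and_postprocess` helper are placeholders, not the file's verified contents) is to let the box head's test path reuse the training-path loss whenever targets are available:

```python
# Hedged sketch of computing the loss in the box head's test path.
# self.loss_evaluator and the 'boxes'/'labels' keys mirror how the training
# path is described; _decode_and_postprocess stands in for the existing
# detection decoding / post-processing code and is purely illustrative.
def _forward_test(self, cls_logits, bbox_pred, targets=None):
    loss_dict = {}
    if targets is not None:
        gt_boxes, gt_labels = targets['boxes'], targets['labels']
        reg_loss, cls_loss = self.loss_evaluator(cls_logits, bbox_pred, gt_labels, gt_boxes)
        loss_dict = dict(reg_loss=reg_loss, cls_loss=cls_loss)

    detections = self._decode_and_postprocess(cls_logits, bbox_pred)  # hypothetical helper
    return detections, loss_dict
```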