
Log evaluation loss while testing model

See original GitHub issue

I would love to see the loss on the test set while evaluating the model during training. I have modified the code to return the loss when testing:

class SSDDetector(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.cfg = cfg
        self.backbone = build_backbone(cfg)
        self.box_head = build_box_head(cfg)

    def forward(self, images, targets=None):
        features = self.backbone(images)
        detections, detector_losses = self.box_head(features, targets)
        if self.training:
            return detector_losses
        return detections, detector_losses

However, detector_losses is empty during evaluation/validation. Any suggestions would be most helpful.
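The losses come back empty because the loss branch of the head only runs in training mode; returning detector_losses from forward is not enough if the head never computed them. A minimal, self-contained sketch of that control flow (plain Python, with a hypothetical placeholder loss standing in for the repo's actual multibox loss):

```python
# Sketch of the box-head control flow. The key change is guarding loss
# computation on `targets is not None` rather than on `self.training`,
# so that evaluation with targets also produces a loss. The loss function
# here is a hypothetical stand-in, not the repo's MultiBoxLoss.
class BoxHeadSketch:
    def __init__(self):
        self.training = False  # mimics nn.Module.training

    def _loss(self, predictions, targets):
        # placeholder: mean absolute difference between flat score lists
        return sum(abs(p - t) for p, t in zip(predictions, targets)) / len(targets)

    def forward(self, predictions, targets=None):
        losses = {}
        # the original code checks `self.training` here instead, which is
        # why the returned dict is empty during evaluation
        if targets is not None:
            losses["reg_loss"] = self._loss(predictions, targets)
        return predictions, losses
```

With targets supplied, the returned dict is populated even though `self.training` is False; without targets it stays empty, mirroring the behavior described above.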

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

1 reaction
priteshgohil commented, Sep 14, 2020

Finally, I can get the validation loss after the following edit. The reason was that the network outputs normalized bounding boxes, while the dataloader's targets were not normalized (the target coordinate values were far too large), so the regression loss value was huge.

The following is the additional change in https://github.com/lufficc/SSD/blob/50373c79b861d5d239be4206fafc6661cea040b4/ssd/data/transforms/__init__.py#L20:

    else:
        transform = [
            ConvertFromInts(), # Convert img to float32
            ToPercentCoords(), # Normalize BBox Cords
            Resize(cfg.INPUT.IMAGE_SIZE),
            SubtractMeans(cfg.INPUT.PIXEL_MEAN),
            ToTensor()
        ]

And the validation loss looks good:

[plot: validation loss curve]
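The effect of adding ToPercentCoords can be sketched in isolation. Assuming it divides absolute pixel coordinates by the image width and height (a plausible reading of the name, not a quotation of the repo's code), it brings the targets into the same normalized range as the network's box outputs, which keeps the regression loss at a sane scale:

```python
# Hypothetical sketch of what a ToPercentCoords-style transform does:
# convert absolute pixel box coordinates into [0, 1] fractions of the
# image size, matching the network's normalized output range.
def to_percent_coords(boxes, width, height):
    # boxes: list of [xmin, ymin, xmax, ymax] in absolute pixels
    return [
        [xmin / width, ymin / height, xmax / width, ymax / height]
        for xmin, ymin, xmax, ymax in boxes
    ]
```

Without this step, a target like `[10, 20, 30, 40]` would be compared against predictions in `[0, 1]`, which is exactly why the regression loss looked enormous before the fix.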

1 reaction
lufficc commented, Sep 2, 2020

SSD won’t compute the losses when testing, since it’s unnecessary and time-consuming. But if you just want to see them, here are some suggestions:

  1. compute targets when testing by passing target_transform: https://github.com/lufficc/SSD/blob/master/ssd/data/build.py#L32

  2. remember to pass them to the model: https://github.com/lufficc/SSD/blob/master/ssd/engine/inference.py#L43

  3. compute the loss when testing here https://github.com/lufficc/SSD/blob/master/ssd/modeling/box_head/box_head.py#L39, following https://github.com/lufficc/SSD/blob/master/ssd/modeling/box_head/box_head.py#L31-L34
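Once targets flow through the dataloader (step 1) and are handed to the model (step 2), the evaluation loop can accumulate the losses the head now returns. A hedged sketch of such a loop, assuming the `(detections, losses)` return convention from the question; the helper name and loop shape are illustrative, not the repo's actual inference code:

```python
def evaluate_with_loss(model, data_loader):
    """Average each loss term over the loader (illustrative helper)."""
    totals = {}
    batches = 0
    for images, targets in data_loader:
        # passing targets is what makes the head compute losses in eval mode
        _, losses = model(images, targets)
        for name, value in losses.items():
            totals[name] = totals.get(name, 0.0) + float(value)
        batches += 1
    return {name: value / batches for name, value in totals.items()}
```

Averaging per loss term (rather than summing) makes the numbers comparable across validation runs with different numbers of batches.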


