
eval_mode giving inconsistent results


Describe the bug: Some networks give different results each time they are called with eval_mode.

I suspect this might be the cause of https://github.com/Project-MONAI/MONAI/issues/1710.

To Reproduce

import torch
from monai.networks.utils import eval_mode

model = torch.nn.BatchNorm2d(1)
im = torch.randn((1, 1, 2, 2))
for _ in range(5):
    res_train = model(im).flatten().detach()  # forward pass in training mode
    with eval_mode(model):
        res_eval = model(im).flatten()  # forward pass in eval mode
    print(res_train)
    print(res_eval)

Example output; notice that the odd rows (training mode) are all identical, while the even rows (eval mode) change on every iteration:

tensor([ 0.7991,  0.5358, -1.7123,  0.3774])
tensor([ 0.7824,  0.4933, -1.9753,  0.3193])
tensor([ 0.7991,  0.5358, -1.7123,  0.3774])
tensor([ 0.7689,  0.4882, -1.9086,  0.3193])
tensor([ 0.7991,  0.5358, -1.7123,  0.3774])
tensor([ 0.7579,  0.4842, -1.8531,  0.3195])
tensor([ 0.7991,  0.5358, -1.7123,  0.3774])
tensor([ 0.7489,  0.4810, -1.8064,  0.3198])
tensor([ 0.7991,  0.5358, -1.7123,  0.3774])
tensor([ 0.7413,  0.4784, -1.7667,  0.3201])
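
The drift in the even rows comes from BatchNorm's running statistics: every train-mode forward pass updates running_mean and running_var, and the next eval-mode pass normalizes with those updated buffers, so the eval output keeps shifting. A minimal sketch to confirm this, independent of eval_mode (the snippet below is illustrative only):

import torch

bn = torch.nn.BatchNorm2d(1)
im = torch.randn((1, 1, 2, 2))

print(bn.running_mean, bn.running_var)  # fresh buffers: mean 0, var 1
for _ in range(3):
    bn(im)  # train-mode forward updates the running statistics in place
    print(bn.running_mean, bn.running_var)  # values drift toward the batch statistics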

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 7 (7 by maintainers)

Top GitHub Comments

1 reaction
wyli commented, Mar 10, 2021

I did a quick test; it seems we shouldn't use eval_mode in the visualization code, since there we want eval() but without torch.no_grad():

https://github.com/Project-MONAI/MONAI/blob/master/monai/visualize/class_activation_maps.py#L122
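
Gradient-based visualizations such as Grad-CAM need autograd enabled, so the torch.no_grad() part of eval_mode is the problem there, not eval() itself. A hypothetical helper along those lines, eval() without suppressing gradients, shown only as a sketch (the name is invented and not a MONAI API):

from contextlib import contextmanager

@contextmanager
def eval_keep_grad(*nets):  # hypothetical name, not part of MONAI
    # remember which networks were in training mode
    was_training = [n.training for n in nets]
    try:
        # switch to eval mode; autograd stays enabled so gradients can still flow
        yield [n.eval() for n in nets]
    finally:
        # restore the original train/eval state on exit
        for n, t in zip(nets, was_training):
            n.train(t)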

0 reactions
wyli commented, Mar 10, 2021

Anyway, eval_mode itself is OK; I'm closing this ticket.
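
As a sanity check, a variant of the repro without the intervening train-mode forward: since eval mode does not update BatchNorm's running statistics, every iteration should print the same values (assuming the cause described above):

import torch
from monai.networks.utils import eval_mode

model = torch.nn.BatchNorm2d(1)
im = torch.randn((1, 1, 2, 2))

# no train-mode forward inside the loop, so the running stats never change
for _ in range(5):
    with eval_mode(model):
        print(model(im).flatten())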
