
Precision raises a NotComputableError for all-zeros model predictions


🐛 Bug description

Following https://github.com/pytorch/ignite/issues/1991#issuecomment-844860265

import torch
from ignite.metrics import Precision
p = Precision()
p.update((torch.zeros(4), torch.randint(0, 2, (4,))))
p.compute()
> NotComputableError: Precision must have at least one example before it can be computed.

which is wrong, since we did call update.

This is related to the check

/opt/conda/lib/python3.8/site-packages/ignite/metrics/precision.py in compute(self)
     51         is_scalar = not isinstance(self._positives, torch.Tensor) or self._positives.ndim == 0
     52         if is_scalar and self._positives == 0:
---> 53             raise NotComputableError(
     54                 f"{self.__class__.__name__} must have at least one example before it can be computed."
     55             )

where self._positives is 0 for all-zeros predictions but is treated as if the metric were uninitialized.

I think this is related to a change in PyTorch: torch.tensor([0, 0, 0]).sum() used to return a 1-d tensor and now returns a 0-d tensor.
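For reference, inspecting the accumulator after the update above makes the failure mode visible. This is only a debugging sketch: it peeks at the private _positives attribute that appears in the traceback, so it may change between Ignite versions.

import torch
from ignite.metrics import Precision

p = Precision()
p.update((torch.zeros(4), torch.randint(0, 2, (4,))))
print(p._positives.ndim)        # 0 -- the summed accumulator is a 0-d tensor
print(bool(p._positives == 0))  # True, so compute() takes the NotComputableError branch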

Environment

  • PyTorch Version (e.g., 1.4): 1.8
  • Ignite Version (e.g., 0.3.0): 0.4.4
  • OS (e.g., Linux):
  • How you installed Ignite (conda, pip, source):
  • Python version:
  • Any other relevant information:

cc @liebkne

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 3
  • Comments: 8 (2 by maintainers)

Top GitHub Comments

4 reactions
trsvchn commented, May 25, 2021

After reading discussion #335 and some experiments, I’d like to propose another solution. It is based on the code proposed in that discussion, but the approach is slightly different: it uses the __new__ magic to wrap the methods at instance creation, so there is no need to call super() and the user cannot break it by overriding __init__, reset, update, etc.

from abc import ABCMeta
from functools import wraps

from ignite.exceptions import NotComputableError


class Metric(metaclass=ABCMeta):
...
    def __new__(cls, *args, **kwargs):
        """Prevents the metric from being computed before it has been updated."""
        # Keep references to the original (unwrapped) methods.
        _reset = cls.reset
        _update = cls.update
        _compute = cls.compute

        def wrapped_reset(self):
            _reset(self)
            self._updated = False

        cls.reset = wraps(cls.reset)(wrapped_reset)

        def wrapped_update(self, output):
            _update(self, output)
            self._updated = True

        cls.update = wraps(cls.update)(wrapped_update)

        def wrapped_compute(self):
            if not self._updated:
                raise NotComputableError(
                    f"{self.__class__.__name__} not updated before compute."
                )
            return _compute(self)

        cls.compute = wraps(cls.compute)(wrapped_compute)

        # object.__new__ must not be given the extra args once __new__ is overridden
        return super(Metric, cls).__new__(cls)
...
    def __init__(...):
        ...

What do you think?
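For illustration, here is a self-contained sketch of the same __new__-wrapping idea, outside of Ignite. The GuardedMetric base and the PositiveCount metric are hypothetical stand-ins, not Ignite classes; the point is that a metric whose accumulator legitimately stays at zero still computes, while an un-updated one raises.

from functools import wraps


class NotComputableError(RuntimeError):
    pass


class GuardedMetric:
    """Minimal stand-in for ignite's Metric, just to exercise the wrapper."""

    def __new__(cls, *args, **kwargs):
        _reset, _update, _compute = cls.reset, cls.update, cls.compute

        @wraps(_reset)
        def reset(self):
            _reset(self)
            self._updated = False

        @wraps(_update)
        def update(self, output):
            _update(self, output)
            self._updated = True

        @wraps(_compute)
        def compute(self):
            if not self._updated:
                raise NotComputableError(
                    f"{self.__class__.__name__} not updated before compute."
                )
            return _compute(self)

        cls.reset, cls.update, cls.compute = reset, update, compute
        return super().__new__(cls)

    def __init__(self):
        self.reset()


class PositiveCount(GuardedMetric):
    """Toy metric whose accumulator can legitimately stay at zero."""

    def reset(self):
        self._positives = 0

    def update(self, output):
        self._positives += sum(output)

    def compute(self):
        return self._positives


m = PositiveCount()
m.update([0, 0, 0, 0])      # all-zeros "predictions"
print(m.compute())          # 0 -- computes fine, because update() was called

try:
    PositiveCount().compute()
except NotComputableError as e:
    print(e)                # PositiveCount not updated before compute.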

2 reactions
trsvchn commented, May 21, 2021

Yes, it’s true.

torch.sum with the default keepdim=False squeezes the dimension, from 1-d to 0-d in our case.
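Concretely, for the all-zeros prediction tensor from the report:

import torch

y_pred = torch.zeros(4)
print(y_pred.sum())                      # tensor(0.) -- 0-d, the dim is squeezed
print(y_pred.sum().shape)                # torch.Size([])
print(y_pred.sum(dim=0, keepdim=True))   # tensor([0.]) -- stays 1-d with keepdim=True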

Thus, in the corner case with all-zeros predictions:

prec.update((torch.zeros(4), ...))

self._positives will equal tensor(0), with shape torch.Size([]) and ndim=0,

and later the check self._positives == 0 evaluates to True and raises the error, since

>>> torch.tensor(0) == 0
tensor(True)