Misleading ValueError - Accuracy Metric Multilabel
See original GitHub issue

🐛 Bug description

When `Accuracy.update()` is called with both inputs having a second dimension of 1 (e.g., in my case `torch.Size([256, 1])`), the raised error message is misleading.
To reproduce

```python
from ignite.metrics import Accuracy
import torch

acc = Accuracy(is_multilabel=True)
acc.update((torch.zeros((256, 1)), torch.zeros((256, 1))))
# ValueError: y and y_pred must have same shape of (batch_size, num_categories, ...).
```
In this case `y` and `y_pred` do have the same shape; the actual problem is that the input is not an accepted multilabel input because of the `y.shape[1] != 1` condition in the following code block from `_check_shape` in `_BaseClassification`. This should be indicated in the error message (or the if statement changed). What is the argument against allowing a `y.shape[1]` of 1?
```python
if self._is_multilabel and not (y.shape == y_pred.shape and y.ndimension() > 1 and y.shape[1] != 1):
    raise ValueError("y and y_pred must have same shape of (batch_size, num_categories, ...).")
```
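To make the failure mode concrete, here is a minimal sketch that replicates the shape check above using plain tuples (ignite operates on torch tensors; the helper name `check_multilabel_shapes` is hypothetical and only for illustration):

```python
def check_multilabel_shapes(y_pred_shape, y_shape):
    """Sketch of ignite's multilabel shape check from _BaseClassification._check_shape.

    Shapes are plain tuples here instead of torch tensor shapes.
    """
    if not (y_shape == y_pred_shape and len(y_shape) > 1 and y_shape[1] != 1):
        # The real message omits the num_categories > 1 requirement, which is
        # exactly what makes it misleading for (256, 1) inputs.
        raise ValueError(
            "y and y_pred must have same shape of (batch_size, num_categories, ...)"
        )

# (256, 1) is rejected even though the shapes match, because num_categories == 1
try:
    check_multilabel_shapes((256, 1), (256, 1))
except ValueError as e:
    print("rejected:", e)

# (256, 4) passes: batch of 256 samples, 4 labels each
check_multilabel_shapes((256, 4), (256, 4))
```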
Environment

- PyTorch Version (e.g., 1.4):
- Ignite Version (e.g., 0.3.0): 0.3.0
- OS (e.g., Linux): Linux
- How you installed Ignite (conda, pip, source): conda
- Python version:
- Any other relevant information:
Issue Analytics

- State:
- Created 3 years ago
- Comments: 9 (4 by maintainers)
Top GitHub Comments
We discussed this issue, and the behavior of `Accuracy` with `is_multilabel=True` in the case of `num_categories == 1` won't change. Actually, the case `num_categories == 1` is already covered by binary mode (i.e., simply `is_multilabel=False` here). We want to maintain a clear usage and not have multiple ways (and maybe bad ones w.r.t. performance) to do the same thing. I agree that `is_multilabel=True` with `num_categories == 1` means binary, but our implementation does not fit that. Btw, I can help you with your specific needs. Feel free to share snippets; as you can see, @vfdev-5 is very fast, my challenge is to be faster 😃
Thanks. It’s fine for me to have another if-condition checking whether my input should use binary mode.