
Are there any built-in multiclass accuracy methods?


I’m new to ignite, and I want to know: are there any built-in multiclass accuracy methods, i.e. ones that report each class’s accuracy separately? I don’t see one in the docs, and the multilabel option for Accuracy doesn’t seem to work for me.

import torch
import ignite.metrics

def output_converter(output):
    """Convert (logits, class-index targets) to one-hot (y_pred, y) pairs."""
    y_pred, y = output
    y_pred_result = torch.zeros(y_pred.shape[0], y_pred.shape[1])
    y_result = torch.zeros(y_pred.shape[0], y_pred.shape[1])
    _, preds_temp = torch.max(y_pred, 1)  # predicted class index per sample
    for i in range(preds_temp.shape[0]):
        y_pred_result[i][preds_temp[i]] = 1
        y_result[i][y[i]] = 1
    return (y_pred_result, y_result)

temp = ignite.metrics.Accuracy(output_transform=output_converter, is_multilabel=True)

This is how I used the multilabel option, but it gives me a single aggregated accuracy instead of an accuracy for each class. What should I do to fix this?
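To make the distinction concrete: multilabel Accuracy collapses everything into one number, while what is wanted here is one accuracy per class. A minimal plain-Python sketch of that arithmetic (hypothetical toy data, not ignite’s implementation; "per-class accuracy" is taken as the fraction of samples of class c predicted as c):

```python
# Hypothetical toy predictions and labels for a 3-class problem.
preds = [0, 1, 2, 2, 1, 0, 2, 1]
labels = [0, 1, 1, 2, 1, 2, 2, 0]
num_classes = 3

# Per-class accuracy: among samples whose true label is c,
# the fraction predicted as c.
per_class = []
for c in range(num_classes):
    idx = [i for i, y in enumerate(labels) if y == c]
    correct = sum(1 for i in idx if preds[i] == c)
    per_class.append(correct / len(idx))

# Overall accuracy: one aggregated number over all samples,
# which is what the multilabel option effectively reports.
overall = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(per_class, overall)
```

The per-class numbers can differ widely even when the overall figure looks reasonable, which is exactly why a single aggregated accuracy hides class imbalance problems.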

Thanks!

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
CDWJustin commented, Apr 25, 2020

Thanks so much! Problem solved! I’ll close the issue then!

1 reaction
sdesrozis commented, Apr 24, 2020

@CDWJustin thank you for this question!

You are right: there is actually no such built-in metric. I discussed exactly this point with @vfdev-5 a few weeks ago.

However, as you did, it’s possible to use output_transform to create class-wise metrics:

def get_single_label_output_fn(c):
    def wrapper(output):
        y_pred, y = output["y_pred"], output["y"]
        return y_pred[:, c], y[:, c]
    return wrapper

for i in range(config.num_classes):
    for name, cls in zip(["Accuracy", "Precision", "Recall"], [Accuracy, Precision, Recall]):
        val_metrics["{}/{}".format(name, i)] = cls(output_transform=get_single_label_output_fn(i))
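To show what the closure above actually feeds each metric, here is the same column-slicing idea in plain Python (the original uses torch tensor slicing `y_pred[:, c]`; the one-hot batch below is hypothetical data):

```python
def get_single_label_output_fn(c):
    """Build an output_transform that selects column c (class c, one-vs-rest)."""
    def wrapper(output):
        y_pred, y = output["y_pred"], output["y"]
        # Select column c from each row; a metric wired with this transform
        # only ever sees the predictions/targets for class c.
        return [row[c] for row in y_pred], [row[c] for row in y]
    return wrapper

# Hypothetical one-hot batch: 2 samples, 3 classes.
output = {
    "y_pred": [[1, 0, 0], [0, 0, 1]],
    "y":      [[1, 0, 0], [0, 1, 0]],
}
transform = get_single_label_output_fn(1)
print(transform(output))  # → ([0, 0], [0, 1])
```

Each metric in the loop gets its own closure over a different class index, so `val_metrics` ends up with one binary-view Accuracy/Precision/Recall per class.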

We have an ongoing PR about nested metrics (#968, related to issue #959). It should help a lot in designing a better answer to your question.

HTH


