
Average precision metric for binary classification

See original GitHub issue

Is your feature request related to a problem? Please describe.
ComputeModelStatistics provides ROC AUC, but Average Precision (areaUnderPR) is missing.

Describe the solution you'd like
It would be great to add it, since it is very useful for many binary classification tasks.

Additional context
None.

AB#1789611
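Until the metric is available in ComputeModelStatistics, one possible workaround is Spark ML's own BinaryClassificationEvaluator, which already supports areaUnderPR. A minimal sketch, assuming a scored DataFrame scored_df with the default rawPrediction and label columns (these names are assumptions, not taken from the issue):

```python
# Workaround sketch (assumption): use Spark ML's evaluator for areaUnderPR
# until ComputeModelStatistics exposes it. `scored_df` is a hypothetical
# DataFrame produced by model.transform(test_df).
from pyspark.ml.evaluation import BinaryClassificationEvaluator

evaluator = BinaryClassificationEvaluator(
    rawPredictionCol="rawPrediction",  # scores from the fitted model
    labelCol="label",                  # ground-truth 0/1 labels
    metricName="areaUnderPR",          # area under the precision-recall curve
)
area_under_pr = evaluator.evaluate(scored_df)
print(f"areaUnderPR: {area_under_pr:.4f}")
```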

Issue Analytics

  • State: open
  • Created: a year ago
  • Comments: 7 (2 by maintainers)

Top GitHub Comments

1 reaction
mhamilton723 commented, May 18, 2022

@imatiach-msft I think area under PR is much better for unbalanced tasks.
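For illustration (not part of the thread), a small scikit-learn sketch on synthetic, heavily imbalanced data shows how the two metrics tend to diverge: ROC AUC often stays high while average precision drops, which is why the PR-based number is usually more informative for rare-positive tasks. All names here are illustrative and unrelated to SynapseML itself.

```python
# Synthetic illustration: ROC AUC vs. average precision on imbalanced data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(
    n_samples=20_000, weights=[0.99, 0.01], flip_y=0.01, random_state=0
)  # roughly 1% positives
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

print("ROC AUC:          ", roc_auc_score(y_te, scores))
print("Average precision:", average_precision_score(y_te, scores))
```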

0 reactions
denmoroz commented, Aug 1, 2022

@ppruthi 👋 Sorry, it is still unclear to me from the conversation above whether this feature will be implemented or not. I can close the issue if it is not planned anytime soon.

Read more comments on GitHub >

Top Results From Across the Web

24 Evaluation Metrics for Binary Classification (And When to ...
Classification metrics let you assess the performance of machine learning models ... It's the harmonic mean between precision and recall.
Read more >
Precision vs. Average Precision | Baeldung on Computer ...
Precision describes a specific ML classifier. In contrast, the average precision evaluates a family of classifiers. To explain the difference, ...
Read more >
Performance metrics for binary classifier (in simple words)
Balanced Accuracy. It is calculated as the average of sensitivity and specificity, i.e. the average of the proportion of correct predictions in each class ...
Read more >
sklearn.metrics.average_precision_score
Compute average precision (AP) from prediction scores. AP summarizes a precision-recall curve as the weighted mean of precisions achieved at each threshold, ...
Read more >
Which metrics are used to evaluate a binary classification ...
Average Precision score (adjusted) is the best measure to use in the case where the positive class is the minority. Hence it is...
Read more >
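To make the definition quoted in the sklearn.metrics.average_precision_score entry above concrete, here is a small sketch (toy labels and scores, chosen only for illustration) that reproduces AP = sum_n (R_n - R_{n-1}) * P_n from the precision-recall curve and checks it against sklearn's function.

```python
# AP as the recall-weighted mean of precisions: AP = sum_n (R_n - R_{n-1}) * P_n.
# Toy data, for illustration only.
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve

y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.70])

precision, recall, _ = precision_recall_curve(y_true, y_score)
# recall decreases along the returned arrays, so -diff(recall) >= 0
ap_manual = float(np.sum(-np.diff(recall) * precision[:-1]))

print(ap_manual)                                 # recall-weighted mean of precisions
print(average_precision_score(y_true, y_score))  # same value from sklearn
```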
