
add sklearn.metrics Display class to plot Precision/Recall/F1 for probability thresholds

See original GitHub issue

Describe the workflow you want to enable

When working with binary classifiers, I often need, in addition to the PR curve and ROC curve, a plot of Precision / Recall / F1 (y-axis) against probability thresholds (x-axis).

Describe your proposed solution

import numpy as np
from sklearn.metrics import precision_recall_curve, PrecisionRecallF1Display
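# NOTE: PrecisionRecallF1Display is the Display class proposed in this issue;
# it does not exist in sklearn.metrics at the time of writing.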

y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])

precision, recall, thresholds = precision_recall_curve(y_true, y_scores)

display = PrecisionRecallF1Display(precision, recall, thresholds, plot_f1=True)
display.plot()
[Figure: prf1-curve, the proposed Precision/Recall/F1 vs. probability threshold plot]
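
In the meantime, a rough sketch of how the same plot can be put together today with the existing precision_recall_curve output and matplotlib (the variable names are only illustrative):

import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])

precision, recall, thresholds = precision_recall_curve(y_true, y_scores)

# precision_recall_curve returns one more precision/recall value than
# thresholds, so drop the final point before plotting against thresholds.
precision, recall = precision[:-1], recall[:-1]
f1 = 2 * precision * recall / (precision + recall + 1e-12)

plt.plot(thresholds, precision, label="precision")
plt.plot(thresholds, recall, label="recall")
plt.plot(thresholds, f1, label="F1")
plt.xlabel("probability threshold")
plt.ylabel("metric value")
plt.legend()
plt.show()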

Describe alternatives you’ve considered, if relevant

No response

Additional context

No response

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 13 (7 by maintainers)

Top GitHub Comments

5 reactions
adrinjalali commented, Nov 10, 2021

I’m quite in favor of one or more plots which investigate the threshold. I’d agree with @dayyass that the two types of plots discussed here are very different in nature and in what they convey. I personally understand the proposed plot in the OP much better than a precision recall curve, and that’s probably because I have never ended up dealing with precision recall curves much in my career, whereas I’ve worked on finding thresholds for my models in a few instances.

In terms of API, I'd be happier with something like MetricThresholdCurve, which could accept different metrics (precision, recall, F1, …) and plot them against the thresholds. We could then use it three times to achieve what the OP suggests.
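
For illustration only, here is a minimal sketch of what such a generic metric-vs-threshold helper could look like; nothing like this exists in scikit-learn today, and the function name and signature are purely hypothetical:

import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

def metric_threshold_curve(y_true, y_scores, metric, thresholds=None):
    # Hypothetical helper: evaluate a binary metric at a grid of probability thresholds.
    if thresholds is None:
        thresholds = np.unique(y_scores)
    values = [metric(y_true, (y_scores >= t).astype(int)) for t in thresholds]
    return np.asarray(thresholds), np.asarray(values)

# Used three times, this reproduces the plot proposed in the OP:
y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])
for metric in (precision_score, recall_score, f1_score):
    thresholds, values = metric_threshold_curve(y_true, y_scores, metric)
    print(metric.__name__, values)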

1 reaction
glemaitre commented, Oct 22, 2021

This would also motivate some example/analysis when introducing the meta-estimator developed in: https://github.com/scikit-learn/scikit-learn/pull/16525
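
For context, threshold tuning as a meta-estimator is available in recent scikit-learn releases as TunedThresholdClassifierCV in sklearn.model_selection (version 1.5 onward), which appears to be where the work referenced in the linked PR eventually landed; a minimal usage sketch, assuming that version:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import TunedThresholdClassifierCV

# Toy imbalanced binary problem.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# Tune the decision threshold to maximise F1 instead of using the default 0.5.
model = TunedThresholdClassifierCV(LogisticRegression(), scoring="f1", cv=5)
model.fit(X, y)
print(model.best_threshold_)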

Read more comments on GitHub >

Top Results From Across the Web

  • sklearn.metrics.precision_recall_curve
    Compute precision-recall pairs for different probability thresholds. Note: this implementation is restricted to the binary classification task. The precision is ...
  • How to Calculate Precision, Recall, F1, and More for Deep ...
    How to use the scikit-learn metrics API to evaluate a deep learning model. How to make both class and probability predictions with a ...
  • Classification Model Scoring with Scikit-Learn
    In this tutorial we look at the differences between accuracy, precision, and recall, plus other metrics used to evaluate classification ...
  • Implementing Precision, Recall, F1, & AUC in Python - YouTube
    Today we implement all of the binary classification metrics in Python: Recall / Sensitivity / True Positive Rate, Specificity / True ...
  • Evaluation Metrics and scoring - Andreas Mueller
