
Example with ranking metrics

See original GitHub issue

Describe the issue linked to the documentation

Some of the metrics in #2805 were implemented in #7739.

Suggest a potential alternative/fix

It would be nice to add an example showing the usage of those ranking metrics, together with, e.g., Kendall’s tau and Spearman’s rho:

from scipy.stats import kendalltau, spearmanr
from sklearn.metrics import make_scorer

# make_scorer expects a callable returning a single score, so keep only the
# correlation statistic (both scipy functions also return a p-value).
kendall_tau_score = make_scorer(lambda y_true, y_pred: kendalltau(y_true, y_pred)[0])
spearman_rho_score = make_scorer(lambda y_true, y_pred: spearmanr(y_true, y_pred)[0])
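
Such a scorer could then be passed to cross-validation. A minimal sketch of how that might look, using the spearman_rho_score scorer defined above (the Ridge model and the synthetic data are purely illustrative, not part of the proposal):

from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic data; any estimator producing continuous predictions would do.
X, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=0)

# Evaluate the model by the rank correlation between true and predicted targets.
scores = cross_val_score(Ridge(), X, y, cv=5, scoring=spearman_rho_score)
print(scores.mean())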

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 9 (8 by maintainers)

Top GitHub Comments

1 reaction
sveneschlbeck commented, Nov 8, 2021

@rth I agree… since having a Recommendation Engine example seemed to be in the interest of multiple people, I got to that first. The other points are also valid but (as you mentioned) do not necessarily combine well with recommendation engines.

0 reactions
rth commented, Nov 8, 2021

I don’t think we have any example tackling the problem of recommendation. It would be nice to have a full example with a predictive model and a way to evaluate it.

As far as I know, ranking is not necessarily a primary evaluation metric in recommendation (see e.g. lightfm.evaluation): one cares more about how many relevant predictions are made in the first N than about whether a given item is first or third. Though I agree that it would still be good to have a recommendation example, but maybe more for the top_k_accuracy_score metric (which apparently doesn’t have any examples either)?
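
For reference, a minimal sketch of top_k_accuracy_score on a toy multiclass problem (the scores and k=2 below are arbitrary, just to show the call):

import numpy as np
from sklearn.metrics import top_k_accuracy_score

# True labels and per-class decision scores for 4 samples and 3 classes.
y_true = np.array([0, 1, 2, 2])
y_score = np.array([[0.5, 0.2, 0.3],
                    [0.3, 0.4, 0.3],
                    [0.2, 0.4, 0.4],
                    [0.7, 0.2, 0.1]])

# A prediction counts as correct if the true class is among the k highest scores.
print(top_k_accuracy_score(y_true, y_score, k=2))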

For DCG and NDCG, what comes to mind is more a search or directly a ranking problem. There are a few ranking problems on OpenML; maybe we could pick one (or find some other open dataset and put it there)? Though of course we could also illustrate them on a recommendation example.
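
To make the DCG/NDCG point concrete, a minimal sketch on hand-made relevance judgments (the numbers are arbitrary and not tied to any particular dataset):

import numpy as np
from sklearn.metrics import dcg_score, ndcg_score

# One query with 5 documents: graded true relevance and predicted ranking scores.
true_relevance = np.array([[3, 2, 3, 0, 1]])
predicted_scores = np.array([[0.9, 0.7, 0.1, 0.3, 0.5]])

print(dcg_score(true_relevance, predicted_scores, k=3))
print(ndcg_score(true_relevance, predicted_scores, k=3))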

A side comment: https://www.openml.org/d/40916 looks interesting, but maybe too political. I do wonder what the partial dependence plot of “Dystopia” with respect to “Happiness” looks like 😃
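
A rough sketch of how that partial dependence plot could be drawn, assuming the OpenML dataset (data_id=40916) exposes numeric columns literally named “Dystopia” and “Happiness” as in the comment above; the actual column names may differ and would need to be adapted:

import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Column names follow the comment above; check df.columns on the real dataset.
df = fetch_openml(data_id=40916, as_frame=True).frame.dropna()
X = df.drop(columns=["Happiness"]).select_dtypes("number")
y = df["Happiness"]

model = GradientBoostingRegressor(random_state=0).fit(X, y)
PartialDependenceDisplay.from_estimator(model, X, features=["Dystopia"])
plt.show()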

Read more comments on GitHub >

Top Results From Across the Web

Evaluation Metrics for Ranking problems: Introduction and ...
Example: AP calculated using the algorithm · At rank 1: RunningSum = 0 + 1/1 = 1, CorrectPredictions = 1...
Read more >
20 Popular Machine Learning Metrics. Part 2: Ranking ...
Some of the popular metrics here include: Pearson correlation coefficient, coefficient of determination (R²), Spearman's rank correlation ...
Read more >
Metrics for evaluating ranking algorithms - Cross Validated
Three relevant metrics are top-k accuracy, precision@k and recall@k. The k depends on your application. For all of them, for the ranking-queries ...
Read more >
Search, Ranking & Evaluation Metrics - LinkedIn
The goal of this post is to give high level introduction to search process, ranking and then different evaluation metrics available.
Read more >
MRR vs MAP vs NDCG: Rank-Aware Evaluation Metrics And ...
The MRR metric does not evaluate the rest of the list of recommended items. It focuses on a single item from the list....
Read more >
