
Confidence of classification output

See original GitHub issue

Hello!

Thank you for your awesome work! I have been using gpytorch for a bit now and love it!

I was wondering how it is possible to obtain the confidence or variance of a classification output, especially in the case of the DKL example, i.e., classification with multiple classes.

If there is no built-in way, the two ideas I had would be either to take the variance output by the Gaussian variational distribution or to compute the variance of the categorical distribution, i.e., use p_i(1 - p_i). Does gpytorch offer a standard way of deriving a confidence measure that I am missing here?
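
For concreteness, the second idea as a minimal sketch, where probs is a hypothetical (num_test, num_classes) tensor of predicted class probabilities:

import torch

# Hypothetical predicted class probabilities for two test points
probs = torch.tensor([[0.70, 0.20, 0.10],
                      [0.40, 0.35, 0.25]])
# Per-class variance of the categorical distribution: p_i * (1 - p_i)
cat_var = probs * (1 - probs)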

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Reactions: 2
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

4 reactions
jacobrgardner commented, Jan 11, 2019

Variational inference (and therefore classification) follows the same pattern as exact regression: passing data through the model in eval mode gets you p(f|D) (or, in the variational setting, q(f|D)), and passing that through the likelihood gets you p(y|D) (or q(y|D)).

In other words, something like:

model.eval()
preds_f = model(test_x)        # latent posterior q(f|D) over the function values
latent_var = preds_f.variance  # variance of f at each test point

This will get you the variances of the latent function(s) f before they are mixed through the multiclass likelihood to form the categorical distribution. Note that if there are multiple GPs (for example, in the standard SV-DKL implementation, where an independent GP is used for each output feature of the neural network), this gives you variances for each GP, and you may need some means of combining the separate variances.
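
For the second step, here is a minimal sketch of mixing the latent posterior through the likelihood, assuming a gpytorch.likelihoods.SoftmaxLikelihood and the model/test_x from the snippet above. GPyTorch marginalizes the latents by Monte Carlo sampling, so the returned Categorical carries a leading sample dimension (controlled by gpytorch.settings.num_likelihood_samples) that is averaged out here:

import torch

model.eval()
likelihood.eval()
with torch.no_grad():
    # Categorical over classes; dim 0 indexes Monte Carlo samples of f
    preds_y = likelihood(model(test_x))
    probs = preds_y.probs.mean(0)    # (num_test, num_classes) class probabilities
    class_var = probs * (1 - probs)  # p_i * (1 - p_i) per class, as suggested above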

2 reactions
cherepanovic commented, Dec 17, 2019

What about uncertainty? Is it possible to get it for each classification?
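
One common per-prediction uncertainty measure (a sketch of standard practice, not a built-in GPyTorch API) is the entropy of the averaged predictive distribution; probs is assumed to be the (num_test, num_classes) tensor from the snippet above:

import torch

# Predictive entropy per test point: 0 for a one-hot prediction,
# log(num_classes) at maximum uncertainty
entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)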

Read more comments on GitHub

Top Results From Across the Web

  • Classification with Confidence - CMU Statistics
    SUMMARY. A framework of classification is developed with a notion of confidence. In this framework, a classifier consists of two tolerance regions in...
  • scikit learn - Classification algorithms that return confidence?
    If you want confidence of classification result, you have two ways. First is using the classifier that will output probabilistic score, ...
  • Confidence Intervals for Machine Learning
    Classification problems are those where a label or class outcome variable is predicted given some input data. ... This accuracy can be calculated...
  • How to use confidence scores in machine learning models
    To better understand this, let's dive into the three main metrics used for classification problems: accuracy, recall and precision.
  • Machine Learning Confidence Scores — All You Need to ...
    A Confidence Score is a number between 0 and 1 that represents the likelihood that the output of a Machine Learning model is...
