Confidence of classification output
Hello!
Thank you for your awesome work. I have been using gpytorch for a while now and love it!
I was wondering how to obtain the confidence or variance of a classification output, especially in the case of the DKL example, i.e. classification with multiple classes.
If there is no built-in way, the two ideas I had were either to take the variance output by the Gaussian variational distribution, or to compute the variance of the categorical distribution, i.e. use p_i(1 - p_i). Does gpytorch offer a standard way of deriving a confidence measure that I am missing here?
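For reference, the categorical-variance idea can be sketched directly; the probability vector below is a hypothetical softmax output, not from a real model:

```python
import numpy as np

# Hypothetical predicted class probabilities for one test point
probs = np.array([0.7, 0.2, 0.1])

# Per-class variance of the categorical distribution: p_i * (1 - p_i).
# It is largest near p_i = 0.5 and shrinks toward 0 as the model
# becomes certain a class is (or is not) the label.
per_class_var = probs * (1 - probs)
```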
Issue Analytics
- State:
- Created 5 years ago
- Reactions: 2
- Comments: 5 (2 by maintainers)
Top GitHub Comments
Variational inference (and therefore classification) follows the same pattern as exact regression: passing data through the model in eval mode gets you p(f|D) (or, in the variational setting, q(f|D)), and passing that through the likelihood gets you p(y|D) (or q(y|D)). In other words, something like:
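A minimal, version-agnostic numpy sketch of this pattern (the latent means and variances below are assumed placeholders, standing in for what `model(test_x)` would return in eval mode with one independent GP per class):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed latent posterior q(f|D) at one test point: one independent
# Gaussian per class, as in the SV-DKL setup with one GP per output.
f_mean = np.array([1.0, -0.5, 0.2])   # hypothetical latent means
f_var = np.array([0.3, 0.8, 0.5])     # hypothetical latent variances

# "Pass through the likelihood" by Monte Carlo: sample f ~ q(f|D),
# then apply the softmax to each sample.
samples = rng.normal(f_mean, np.sqrt(f_var), size=(10_000, 3))
exp = np.exp(samples - samples.max(axis=1, keepdims=True))
probs = exp / exp.sum(axis=1, keepdims=True)

p_mean = probs.mean(axis=0)   # approximate q(y|D) class probabilities
p_std = probs.std(axis=0)     # spread induced by latent-function uncertainty
```

The key point is that the class probabilities inherit uncertainty from the latent GPs: `p_std` is nonzero precisely because `f_var` is.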
This will get you the variances of the latent function(s) f before they are mixed by the multiclass likelihood to form the categorical distribution. Note that if there are multiple GPs (for example, in the standard SV-DKL implementation, where an independent GP is used for each output feature of the neural network), this will get you variances for each GP, and you may need some means of combining the separate variances.

What about uncertainty? Is it possible to get it for each classification?
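One common per-prediction uncertainty measure is the entropy of the predicted categorical distribution; a minimal sketch, where the probability vector is a hypothetical model output:

```python
import numpy as np

# Hypothetical predicted class probabilities for one test point
probs = np.array([0.7, 0.2, 0.1])

# Entropy of the categorical distribution: 0 for a one-hot (fully
# confident) prediction, log(K) for a uniform (maximally uncertain) one.
entropy = -np.sum(probs * np.log(probs))
```

Computing this per test point gives a single uncertainty score for each classification, which can be compared against the uniform-distribution ceiling `np.log(K)`.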