Error when running Exact GP regression
I'm using GPyTorch 0.1.1
and PyTorch 1.0.0
on Ubuntu 16.04.
I was trying to use ExactGP
to fit data that looks like this (dots are training data; figure omitted).
The training code is exactly the same as the one in the Simple GP Regression tutorial; a minimal sketch of it follows below.
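For reference, a sketch of that tutorial-style setup (the loop body matches the lines visible in the traceback below; the model definition, learning rate, and iteration count are assumptions reconstructed from the tutorial, and train_x/train_y are the tensors from the plot):

import torch
import gpytorch

# GP model following the Simple GP Regression tutorial
class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)

# Find optimal model hyperparameters
model.train()
likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

training_iterations = 5000  # assumption: a large iteration count, per the comments below
for i in range(training_iterations):
    optimizer.zero_grad()
    output = model(train_x)
    # Calc loss and backprop gradients
    loss = -mll(output, train_y)
    loss.backward()
    if not(i % 500):
        print('Iter %d - Loss: %.3f' % (i, loss.item()))
    optimizer.step()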
But as training progresses, I run into this error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-39-75020af4f8c3> in <module>()
22 output = model(train_x)
23 # Calc loss and backprop gradients
---> 24 loss = -mll(output, train_y)
25 loss.backward()
26 if not(i%500):
~/.conda/envs/pyro/lib/python3.6/site-packages/gpytorch/module.py in __call__(self, *inputs, **kwargs)
20
21 def __call__(self, *inputs, **kwargs):
---> 22 outputs = self.forward(*inputs, **kwargs)
23
24 if isinstance(outputs, tuple):
~/.conda/envs/pyro/lib/python3.6/site-packages/gpytorch/mlls/exact_marginal_log_likelihood.py in forward(self, output, target, *params)
26 # Get the log prob of the marginal distribution
27 output = self.likelihood(output, *params)
---> 28 res = output.log_prob(target)
29
30 # Add terms for SGPR / when inducing points are learned
~/.conda/envs/pyro/lib/python3.6/site-packages/gpytorch/distributions/multivariate_normal.py in log_prob(self, value)
121
122 # Get log determininat and first part of quadratic form
--> 123 inv_quad, log_det = covar.inv_quad_log_det(inv_quad_rhs=diff.unsqueeze(-1), log_det=True)
124
125 res = -0.5 * sum([inv_quad, log_det, diff.size(-1) * math.log(2 * math.pi)])
~/.conda/envs/pyro/lib/python3.6/site-packages/gpytorch/lazy/lazy_tensor.py in inv_quad_log_det(self, inv_quad_rhs, log_det, reduce_inv_quad)
715 preconditioner=self._preconditioner()[0],
716 log_det_correction=self._preconditioner()[1],
--> 717 )(*args)
718
719 if inv_quad_term.numel() and reduce_inv_quad:
TypeError: InvQuadLogDet.forward: expected Variable (got float) for return value 1
However, this error doesn't always happen; it only appears in about 2 out of 10 runs. I haven't managed to train the model successfully even in the runs where the error didn't show up (but that may be a different issue). Any ideas would be appreciated.
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I would recommend fixing the noise to something really small, or setting a prior on the noise parameter. @Balandat is also working on bounds for the parameters, which might help.
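Both options might look roughly like this in a recent GPyTorch (a minimal sketch; in 0.1.1 the noise was parameterized as log_noise and the prior API differed, so treat the exact names here as assumptions):

import gpytorch

# Option 1: fix the noise at a small value and exclude it from optimization
likelihood = gpytorch.likelihoods.GaussianLikelihood()
likelihood.noise = 1e-4
likelihood.raw_noise.requires_grad_(False)

# Option 2: place a prior on the noise to keep it away from zero
likelihood = gpytorch.likelihoods.GaussianLikelihood(
    noise_prior=gpytorch.priors.GammaPrior(1.1, 0.05),
)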
You're running a ton of iterations to fit the model. My guess is that there is very little noise in the data, and the inferred noise level eventually gets so small that you run into numerical errors. At what iteration do you start to see this fail?
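One way to check this hypothesis is to log the inferred noise alongside the loss in the training loop; a minimal sketch, again assuming a recent GPyTorch where the noise is exposed as likelihood.noise:

for i in range(training_iterations):
    optimizer.zero_grad()
    output = model(train_x)
    loss = -mll(output, train_y)
    loss.backward()
    if not(i % 500):
        # Watch whether the inferred observation noise collapses toward zero
        print('Iter %d - Loss: %.3f - Noise: %.3e'
              % (i, loss.item(), likelihood.noise.item()))
    optimizer.step()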