
White Noise kernel addition with grid interpolation

See original GitHub issue

Hi all,

I’m unable to run a non-exact GP model with a white noise kernel added. Specifically, I tried the Kronecker classification and regression examples, along with the additive classification example, substituting in:

self.base_covar_module = RBFKernel(log_lengthscale_bounds=(-5, 6)) + WhiteNoiseKernel(input_variance)

where input_variance is:

input_variance = torch.squeeze(torch.from_numpy(numpy.random.rand(len(train_y), 1) / 100.))

In the Kronecker example I receive an error related to dimension size:

The expanded size of the tensor (1) must match the existing size (900) at non-singleton dimension 1

And with the additive classification example I receive the error:

‘SumLazyVariable’ object has no attribute ‘repeat’

Any thoughts on implementing kernel addition at the grid inducing points?

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
jacobrgardner commented, Jun 19, 2018

Hmmmm.

This is a little trickier actually. In classification, you don’t define the grid interpolation kernel, the GridInducingVariationalGP handles it for you (because it needs to learn things related to the inducing points). See our kissgp classification example, and note that we just call the RBF kernel. The problem being, of course, that we can’t just add the WhiteNoiseKernel to the RBF kernel, because the WhiteNoiseKernel is ill-defined for the inducing points.
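
One way to see the conceptual problem, beyond the dimension mismatch in the errors above: grid interpolation (SKI) approximates the data covariance as W K_UU W^T over the grid points U, so any kernel you hand it is only ever evaluated on the grid and then interpolated back to the data. A white-noise kernel evaluated on the grid just contributes an identity there, and the interpolation smears it into a dense matrix rather than the per-data-point diagonal noise that was intended. A tiny pure-PyTorch illustration of the point (all numbers made up; this is not GPyTorch code):

import torch

# SKI: K_data ≈ W @ K_uu @ W.T, where U are the grid/inducing points.
torch.manual_seed(0)
n, m = 4, 3
W = torch.rand(n, m)            # stand-in interpolation weights (n data points, m grid points)
K_uu_white = torch.eye(m)       # a "white noise" kernel evaluated on the grid is just I
approx = W @ K_uu_white @ W.T   # what interpolation hands back: dense, not diagonal
wanted = 0.01 * torch.eye(n)    # the per-data-point noise we actually wanted to add
print(approx)
print(wanted)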

I think it’s technically easy enough to solve. We basically want to add the DiagLazyVariable that WhiteNoiseKernel returns to the test covariance here: https://github.com/cornellius-gp/gpytorch/blob/1578f80b18056b0d1cc6d0386048f1fd83499c49/gpytorch/models/grid_inducing_variational_gp.py#L88-L90

Something like:

test_covar = test_covar + my_white_noise_covar_module(inputs)

This could be accomplished by extending GridInducingVariationalGP to a GridInducingPlusWhiteNoiseVariationalGP or something. However, this is obviously a little unsatisfactory from a usability standpoint. Maybe @gpleiss and I can think about whether we can better support kernels during variational inference that need to operate on the data kernel but not on the inducing point kernel.
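
As a toy, standalone version of that one-line fix: evaluate the data kernel at the inputs and add a diagonal white-noise term directly to it, instead of pushing the white noise through the grid. (The class names below follow later GPyTorch releases, where DiagLazyVariable became DiagLazyTensor; the inputs and noise values are made up.)

import torch
from gpytorch.kernels import RBFKernel
from gpytorch.lazy import DiagLazyTensor

x = torch.linspace(0, 1, 5).unsqueeze(-1)        # toy inputs
noise = torch.full((5,), 1e-2)                   # fixed per-point noise variances
data_covar = RBFKernel()(x, x)                   # lazy RBF covariance at the data points
test_covar = data_covar + DiagLazyTensor(noise)  # the proposed addition
print(test_covar.evaluate())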

0 reactions
jacobrgardner commented, Nov 4, 2018

This is now possible to do for variational inference by way of #335. The WhiteNoiseKernel still won’t apply to the scalable methods like it does in the exact GP case, but all that’s necessary is a new variational strategy that adds the white noise at the end of the forward method of whatever base variational strategy is being used. Something like:

from gpytorch.variational import VariationalStrategy
from gpytorch.distributions import MultivariateNormal


class WhiteNoiseVariationalStrategy(VariationalStrategy):
    # white_noise_module takes in x and returns e.g. a DiagLazyTensor or ZeroLazyTensor
    # or whatever other noise covariance matrix might be applicable.
    def __init__(self, base_variational_strategy, white_noise_module):
        super(WhiteNoiseVariationalStrategy, self).__init__(
            base_variational_strategy.model,
            base_variational_strategy.inducing_points,
            base_variational_strategy.variational_distribution
        )
        self.base_variational_strategy = base_variational_strategy
        self.white_noise_module = white_noise_module

    @property
    def prior_distribution(self):
        return self.base_variational_strategy.prior_distribution

    def forward(self, x):
        # run the wrapped strategy as usual, then add the white noise
        # covariance evaluated at the data points to its output
        base_mvn = self.base_variational_strategy.forward(x)
        new_covar = base_mvn.lazy_covariance_matrix + self.white_noise_module(x)
        return MultivariateNormal(base_mvn.mean, new_covar)
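
For concreteness, here is one hedged sketch of what the white_noise_module argument could look like. FixedWhiteNoiseModule is a made-up name rather than anything shipped with GPyTorch, and the DiagLazyTensor import matches the naming used in the comment above (newer releases have since moved the lazy tensor machinery toward LinearOperator, so adjust imports for your version):

import torch
from gpytorch.lazy import DiagLazyTensor

class FixedWhiteNoiseModule(torch.nn.Module):
    """Hypothetical white-noise module: given inputs x, return a diagonal
    noise covariance with one entry per data point."""
    def __init__(self, noise=1e-2):
        super(FixedWhiteNoiseModule, self).__init__()
        self.register_buffer("noise", torch.tensor(noise))

    def forward(self, x):
        return DiagLazyTensor(self.noise.to(x.dtype).expand(x.size(-2)))

# hypothetical wiring, given a base strategy built the usual way:
# strategy = WhiteNoiseVariationalStrategy(base_variational_strategy, FixedWhiteNoiseModule(1e-2))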

