
[Question] Impact of x scale on results

See original GitHub issue

Hello, I am trying to fit some observations with the simplest model from the tutorials:

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

# Training data (omitted for brevity).
#train_y = ...
#train_x = ...

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)

# Training code (omitted for brevity).
# Optimizer: Adam, lr=0.1
# Loss: ExactMarginalLogLikelihood

After training, I evaluate the model on a 100-point linspace over the training range (the model and likelihood must be switched to eval mode first):

model.eval()
likelihood.eval()

with torch.no_grad(), gpytorch.settings.fast_pred_var():
    test_x = torch.linspace(0, train_x[-1].item(), 100, dtype=torch.double)
    observed_pred = likelihood(model(test_x))

However, I am experiencing very different results depending on how I choose to scale the x dimension.

X in [0…1]

[figure omitted]

X in [0…100]

[figure omitted]

X in [0…1000]

[figure omitted]

I am confused: I expected the results to be invariant with respect to the scale chosen for the x dimension. Is this behaviour consistent with the theory of Gaussian processes?

Thanks in advance for any help!

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5

Top GitHub Comments

1 reaction
Balandat commented, Nov 30, 2021

@LMolr are your observations indeed noiseless? As Wesley said, you can crank down the likelihood noise sigma and not estimate it as part of the fitting. In the limit sigma -> 0 the model mean will pass through the observations, but a value of exactly zero may lead to numerical issues because of ill-conditioned linear solves.

1 reaction
wjmaddox commented, Nov 30, 2021

To induce the model to fit the data more closely, one suggestion would be to set the likelihood noise to a small value at initialization (or even to fix it). For example: model.likelihood.noise = 1e-3 * torch.ones(1)

