[Docs][Question] Example on how to adapt Multitask GP Regression to multi-dimensional inputs.
See original GitHub issue.

How can one adapt the Multitask GP Regression example from the GPyTorch documentation, which I reproduce here for convenience:
```python
import math
import torch

# 100 scalar inputs; train_y stacks the two tasks column-wise -> shape (100, 2)
train_x = torch.linspace(0, 1, 100)
train_y = torch.stack([
    torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2,
    torch.cos(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2,
], -1)
```
to the case of 2-dimensional inputs? More specifically, I would like the sine and cosine to take 2 inputs instead of 1. For example, like this:
```python
import numpy as np

x = np.arange(-5, 5, 0.1)
y = np.arange(-5, 5, 0.1)
xx, yy = np.meshgrid(x, y)
z1 = np.sin(xx**2 + yy**2) / (xx**2 + yy**2)
z2 = np.cos(xx**2 + yy**2) / (xx**2 + yy**2)
```
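To feed such a grid into the torch-based multitask setup above, the mesh can be flattened so that each row of `train_x` is a 2-D input point and each column of `train_y` is a task. A minimal sketch, assuming the same functions as the numpy snippet (it uses `torch.linspace` rather than `arange` so the grid never lands exactly on the origin, keeping the division finite; all variable names are illustrative):

```python
import torch

# 2-D input grid; linspace(-5, 5, 100) steps by 10/99, so no grid point
# is exactly (0, 0) and the divisions below stay finite.
xs = torch.linspace(-5, 5, 100)
ys = torch.linspace(-5, 5, 100)
xx, yy = torch.meshgrid(xs, ys, indexing="ij")

# Flatten the grid: one row per point -> shape (10000, 2).
train_x = torch.stack([xx.reshape(-1), yy.reshape(-1)], dim=-1)

# Two tasks, one column each -> shape (10000, 2), matching the 1-D tutorial layout.
r2 = train_x[:, 0] ** 2 + train_x[:, 1] ** 2
train_y = torch.stack([torch.sin(r2) / r2, torch.cos(r2) / r2], dim=-1)

print(train_x.shape)  # torch.Size([10000, 2])
print(train_y.shape)  # torch.Size([10000, 2])
```

With the data in this shape, only the model's input dimensionality changes; the task dimension is handled exactly as in the 1-D example.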
Issue Analytics
- State:
- Created 3 years ago
- Comments: 7 (7 by maintainers)
Top Results From Across the Web

[Question] Implementing multi-output multi-task approximate GP
I am looking into implementing a model that produces multiple correlated outputs for multiple tasks (multi-task multi-output - MTMO).

Multitask GP Regression — GPyTorch 1.9.0 documentation
Multitask regression, introduced in this paper, learns similarities in the outputs simultaneously. It's useful when you are performing regression on multiple ...

How do I use the GPML package for multi dimensional input?
Now I have my own data for regression where the x (input) matrix is a 54x10 matrix (54 samples, 10 input vars), and...

Cluster-Specific Predictions with Multi-Task Gaussian Processes
We establish explicit formulas for integrating the mean processes and the latent clustering variables within a predictive distribution.

Multiple-output Gaussian Process regression in scikit-learn
As a prelude, let's make clear that the concepts of variance & standard deviation are defined only for scalar variables; for vector variables...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
This isn't a dimensionality issue. Numpy's default dtype is double while torch's is float. You need to do

```python
train_x = train_x.float()
train_y = train_y.float()
```

to convert the data to fp32.

I did not know about that. Thanks again!
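Applying that fix to the numpy snippet from the question, a quick sketch of the dtype mismatch and the conversion (variable names are illustrative):

```python
import numpy as np
import torch

x = np.arange(-5, 5, 0.1)
y = np.arange(-5, 5, 0.1)
xx, yy = np.meshgrid(x, y)

# NumPy defaults to float64 ("double"); torch modules default to float32.
train_x = torch.from_numpy(np.stack([xx.ravel(), yy.ravel()], axis=-1))
print(train_x.dtype)  # torch.float64

# Convert to fp32 before handing the data to a torch/GPyTorch model.
train_x = train_x.float()
print(train_x.dtype)  # torch.float32
```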