📚 Documentation/Examples
Hello, I’m trying to implement a Variational Multioutput model by following the example at https://docs.gpytorch.ai/en/v1.1.1/examples/04_Variational_and_Approximate_GPs/SVGP_Multitask_GP_Regression.html
In my case, the number of tasks is defined by nTasks. The input dimension is n x d and the output dimension is n x nTasks. The motivation was mentioned in my question from yesterday: the outputs need to be constrained to the [0, 1] range. I was advised to use BetaLikelihood.
My question is what the shape of the inducing points should be for the multi-output case. Is it nTasks x nLatentSpace x d? The example documentation is not very clear on this. Also, is there a multitask version of BetaLikelihood somewhere? Thank you in advance from a newbie. My model class is below:
```python
import torch
import gpytorch


class MultitaskVarGPModel(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points, nTasks):
        # One variational distribution per task over the inducing points
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(-2), batch_shape=torch.Size([nTasks])
        )
        # Wrap a batch of independent GPs so the model returns one output per task
        variational_strategy = gpytorch.variational.MultitaskVariationalStrategy(
            gpytorch.variational.VariationalStrategy(
                self, inducing_points, variational_distribution, learn_inducing_locations=True
            ),
            num_tasks=nTasks,
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean(batch_shape=torch.Size([nTasks]))
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(batch_shape=torch.Size([nTasks])),
            batch_shape=torch.Size([nTasks]),
        )

    def forward(self, x):
        # The forward function should be written as if we were dealing with each
        # output dimension in batch
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
```
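For concreteness, here is my current guess at how the pieces fit together. The sizes, the `(nTasks, m, d)` inducing shape, and the batched `BetaLikelihood` are all my own guesses, not something the example shows:

```python
import torch
import gpytorch

nTasks, m, d = 4, 16, 2  # guessed sizes: tasks, inducing points per task, input dim

# Guess: one set of m inducing points in R^d per task -> shape (nTasks, m, d)
inducing_points = torch.rand(nTasks, m, d)
model = MultitaskVarGPModel(inducing_points, nTasks)

# I could not find a multitask BetaLikelihood, so I am batching the plain one;
# whether this broadcasts correctly over the task dimension is part of my question.
likelihood = gpytorch.likelihoods.BetaLikelihood(batch_shape=torch.Size([nTasks]))

mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=100)  # num_data = n
```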
**Is there documentation missing?**
**Is documentation wrong?**
**Is there a feature that needs some example code?**
**Think you know how to fix the docs?** (If so, we’d love a pull request from you!)
- Link to GPyTorch documentation
- Link to GPyTorch examples
Issue Analytics
- State: Closed
- Created 3 years ago
- Comments: 11 (5 by maintainers)
Top GitHub Comments
@ZhiliangWu `LMCVariationalStrategy` and `IndependentMultitaskVariationalStrategy` both convert a batch of GPs into a multitask GP. For example, the `IndependentMultitaskVariationalStrategy` converts the output of `d` Gaussian processes into an `n x d` `MultitaskMultivariateNormal`. Both `IndependentMultitaskVariationalStrategy` and `LMCVariationalStrategy` must operate on a batch of Gaussian processes, since multiple GPs are needed for multiple output dimensions. (See section 4.2 of this paper for more information.)
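To make that shape contract concrete, here is a tiny standalone sketch (arbitrary sizes) using `MultitaskMultivariateNormal.from_batch_mvn`, which appears to be the same reinterpretation these strategies perform:

```python
import torch
from gpytorch.distributions import MultivariateNormal, MultitaskMultivariateNormal

d, n = 3, 5  # arbitrary: d independent GPs evaluated at n points

# A batch of d single-output MVNs over n points ...
batch_mvn = MultivariateNormal(torch.zeros(d, n), torch.eye(n).repeat(d, 1, 1))
print(batch_mvn.batch_shape, batch_mvn.event_shape)  # torch.Size([3]) torch.Size([5])

# ... reinterpreted as one joint distribution over n points x d tasks
mtmvn = MultitaskMultivariateNormal.from_batch_mvn(batch_mvn, task_dim=-1)
print(mtmvn.event_shape)  # torch.Size([5, 3]), i.e. n x d
```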
However, it’s possible that you might want to have a batch of `c` multi-output GPs, each with `d` dimensions. In this case, the `IndependentMultitaskVariationalStrategy` would have a batch shape of `[c, d]`, where `d` would correspond to the multiple outputs, and `c` would correspond to the actual batch of multi-output GPs. In this case, the `task_dim` argument specifies that the `d` dimension corresponds to the GP outputs.
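A sketch of that batched multi-output setup, assuming a batch shape of `[c, d]` as described; the class name and all sizes are illustrative, and the strategy names assume a reasonably recent GPyTorch:

```python
import torch
import gpytorch

c, d, m, d_in = 2, 4, 16, 3  # illustrative: model copies, outputs, inducing pts, input dim

class BatchedMultitaskGP(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points):  # inducing_points: (c, d, m, d_in)
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(-2), batch_shape=torch.Size([c, d])
        )
        variational_strategy = gpytorch.variational.IndependentMultitaskVariationalStrategy(
            gpytorch.variational.VariationalStrategy(
                self, inducing_points, variational_distribution, learn_inducing_locations=True
            ),
            num_tasks=d,
            task_dim=-1,  # the trailing batch dimension (d) holds the GP outputs
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean(batch_shape=torch.Size([c, d]))
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(batch_shape=torch.Size([c, d])),
            batch_shape=torch.Size([c, d]),
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

model = BatchedMultitaskGP(torch.rand(c, d, m, d_in))
```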
Your understanding is correct. If you want 4 outputs, then use `IndependentMultitaskVariationalStrategy` or `LMCVariationalStrategy` as you have described.
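For completeness, a similarly hedged sketch of the `LMCVariationalStrategy` route, where `num_latents` latent GPs are mixed into `num_tasks` outputs (this may be what the `nLatentSpace` dimension in the original question corresponds to; the class name and all sizes are illustrative):

```python
import torch
import gpytorch

num_latents, num_tasks, m, d_in = 3, 4, 16, 2  # illustrative sizes

class LMCModel(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points):  # inducing_points: (num_latents, m, d_in)
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(-2), batch_shape=torch.Size([num_latents])
        )
        # Learned mixing weights combine the num_latents GPs into num_tasks outputs
        variational_strategy = gpytorch.variational.LMCVariationalStrategy(
            gpytorch.variational.VariationalStrategy(
                self, inducing_points, variational_distribution, learn_inducing_locations=True
            ),
            num_tasks=num_tasks,
            num_latents=num_latents,
            latent_dim=-1,  # the batch dimension holding the latent GPs
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean(batch_shape=torch.Size([num_latents]))
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(batch_shape=torch.Size([num_latents])),
            batch_shape=torch.Size([num_latents]),
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

model = LMCModel(torch.rand(num_latents, m, d_in))
```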
Just ignore that earlier point. What I’m saying is that you can train `c` independent vector-valued GP models, each with `d` output dimensions. It sounds like, from the way you describe the issue, you understand how to use `IndependentMultitaskVariationalStrategy` and `LMCVariationalStrategy` for your purposes.

This thread is quite long and difficult for me to follow/context-switch to, so I am going to close it for now. If you still have questions, please open up a new discussion.