
I get 'RuntimeError: expected scalar type Float but found Double' but my input is already a float?

See original GitHub issue

Hi, I get 'RuntimeError: expected scalar type Float but found Double' whenever I try to evaluate the standard GP on this data, even though the elements of the data have been converted to floats.

I am using the following data (screenshot omitted from the original post): train_data is a matrix with 10,362 observations in the rows and 81 features for each observation.
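(A quick check worth running here: Python floats and NumPy's default dtype are 64-bit, so a tensor built from them is Double, i.e. torch.float64, while PyTorch modules default to float32. A minimal sketch, with arr standing in for the data described above:)

import numpy as np
import torch

arr = np.ones((10362, 81))    # NumPy defaults to float64
x = torch.as_tensor(arr)
print(x.dtype)                # torch.float64 -> reported as "Double"
print(x.float().dtype)        # torch.float32 -> what the model expects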

model:

import gpytorch

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_X, train_y, likelihood):
        super(ExactGPModel, self).__init__(train_X, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        # Prior mean and covariance evaluated at the inputs x
        mean = self.mean_module(x)
        covar = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean, covar)

# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_X, train_y, likelihood)
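(The training step is omitted from the post; for context, a minimal sketch of the standard GPyTorch exact-GP training loop, with placeholder learning rate and iteration count:)

import torch
import gpytorch

model.train()
likelihood.train()

optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
# Negative marginal log likelihood is the loss for exact GP regression
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

for i in range(50):
    optimizer.zero_grad()
    output = model(train_X)
    loss = -mll(output, train_y)
    loss.backward()
    optimizer.step()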

After training, when I try to evaluate on the test set, I get the following:

# predictions
f_preds = model(test_X)
y_preds = likelihood(model(test_X))

Error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-98-c492ec43638f> in <module>
      1 #predictions
      2 
----> 3 f_preds = model(test_X)
      4 y_preds = likelihood(model(test_X))
      5 

~/opt/anaconda3/lib/python3.7/site-packages/gpytorch/models/exact_gp.py in __call__(self, *args, **kwargs)
    317             # Make the prediction
    318             with settings._use_eval_tolerance():
--> 319                 predictive_mean, predictive_covar = self.prediction_strategy.exact_prediction(full_mean, full_covar)
    320 
    321             # Reshape predictive mean to match the appropriate event shape

~/opt/anaconda3/lib/python3.7/site-packages/gpytorch/models/exact_prediction_strategies.py in exact_prediction(self, joint_mean, joint_covar)
    260 
    261         return (
--> 262             self.exact_predictive_mean(test_mean, test_train_covar),
    263             self.exact_predictive_covar(test_test_covar, test_train_covar),
    264         )

~/opt/anaconda3/lib/python3.7/site-packages/gpytorch/models/exact_prediction_strategies.py in exact_predictive_mean(self, test_mean, test_train_covar)
    278         # You **cannot* use addmv here, because test_train_covar may not actually be a non lazy tensor even for an exact
    279         # GP, and using addmv requires you to delazify test_train_covar, which is obviously a huge no-no!
--> 280         res = (test_train_covar @ self.mean_cache.unsqueeze(-1)).squeeze(-1)
    281         res = res + test_mean
    282 

~/opt/anaconda3/lib/python3.7/site-packages/gpytorch/utils/memoize.py in g(self, *args, **kwargs)
     57         kwargs_pkl = pickle.dumps(kwargs)
     58         if not _is_in_cache(self, cache_name, *args, kwargs_pkl=kwargs_pkl):
---> 59             return _add_to_cache(self, cache_name, method(self, *args, **kwargs), *args, kwargs_pkl=kwargs_pkl)
     60         return _get_from_cache(self, cache_name, *args, kwargs_pkl=kwargs_pkl)
     61 

~/opt/anaconda3/lib/python3.7/site-packages/gpytorch/models/exact_prediction_strategies.py in mean_cache(self)
    227 
    228         train_labels_offset = (self.train_labels - train_mean).unsqueeze(-1)
--> 229         mean_cache = train_train_covar.evaluate_kernel().inv_matmul(train_labels_offset).squeeze(-1)
    230 
    231         if settings.detach_test_caches.on():

~/opt/anaconda3/lib/python3.7/site-packages/gpytorch/lazy/lazy_tensor.py in inv_matmul(self, right_tensor, left_tensor)
   1173         func = InvMatmul
   1174         if left_tensor is None:
-> 1175             return func.apply(self.representation_tree(), False, right_tensor, *self.representation())
   1176         else:
   1177             return func.apply(self.representation_tree(), True, left_tensor, right_tensor, *self.representation())

~/opt/anaconda3/lib/python3.7/site-packages/gpytorch/functions/_inv_matmul.py in forward(ctx, representation_tree, has_left, *args)
     51             res = left_tensor @ res
     52         else:
---> 53             solves = _solve(lazy_tsr, right_tensor)
     54             res = solves
     55 

~/opt/anaconda3/lib/python3.7/site-packages/gpytorch/functions/_inv_matmul.py in _solve(lazy_tsr, rhs)
     19         with torch.no_grad():
     20             preconditioner = lazy_tsr.detach()._inv_matmul_preconditioner()
---> 21         return lazy_tsr._solve(rhs, preconditioner)
     22 
     23 

~/opt/anaconda3/lib/python3.7/site-packages/gpytorch/lazy/lazy_tensor.py in _solve(self, rhs, preconditioner, num_tridiag)
    662             max_iter=settings.max_cg_iterations.value(),
    663             max_tridiag_iter=settings.max_lanczos_quadrature_iterations.value(),
--> 664             preconditioner=preconditioner,
    665         )
    666 

~/opt/anaconda3/lib/python3.7/site-packages/gpytorch/utils/linear_cg.py in linear_cg(matmul_closure, rhs, n_tridiag, tolerance, eps, stop_updating_after, max_iter, max_tridiag_iter, initial_guess, preconditioner)
    172 
    173     # residual: residual_{0} = b_vec - lhs x_{0}
--> 174     residual = rhs - matmul_closure(initial_guess)
    175     batch_shape = residual.shape[:-2]
    176 

~/opt/anaconda3/lib/python3.7/site-packages/gpytorch/lazy/added_diag_lazy_tensor.py in _matmul(self, rhs)
     55 
     56     def _matmul(self, rhs):
---> 57         return torch.addcmul(self._lazy_tensor._matmul(rhs), self._diag_tensor._diag.unsqueeze(-1), rhs)
     58 
     59     def add_diag(self, added_diag):

~/opt/anaconda3/lib/python3.7/site-packages/gpytorch/lazy/non_lazy_tensor.py in _matmul(self, rhs)
     42 
     43     def _matmul(self, rhs):
---> 44         return torch.matmul(self.tensor, rhs)
     45 
     46     def _prod_batch(self, dim):

RuntimeError: expected scalar type Float but found Double

Why is this?

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 5

Top GitHub Comments

1 reaction
wjmaddox commented, May 10, 2022

Try setting train_y to be a float.

To see the dtypes of the parameters, use list(model.named_parameters()).

To set the model itself to either single or double precision, you can do model = model.double() (or model = model.float()).
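(A minimal sketch of that fix, assuming train_X, train_y, and test_X start out as NumPy arrays or float64 tensors, as in the question:)

import torch

# Option 1: cast the data down to float32, the model's default dtype.
train_X = torch.as_tensor(train_X).float()
train_y = torch.as_tensor(train_y).float()
test_X = torch.as_tensor(test_X).float()

# Inspect the dtypes of the model parameters to confirm the mismatch.
for name, param in model.named_parameters():
    print(name, param.dtype)

# Option 2: keep the data in float64 and move the model (and likelihood)
# to double precision instead.
model = model.double()
likelihood = likelihood.double()

(Either side works, as long as the data and the parameters end up with the same dtype.)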

0 reactions
pok99 commented, May 10, 2022

Thank you, it worked!


