[bug] SVDKL demo error
When running the demo code of SVDKL, I receive the following error at the last step:

```
(Epoch 1) Minibatch: 0% 0/196 [00:00<?, ?it/s]
/home/j/v/anaconda3/envs/ggp/lib/python3.6/site-packages/gpytorch/utils/cholesky.py:44: NumericalWarning: A not p.d., added jitter of 1.0e-06 to the diagonal
  warnings.warn(f"A not p.d., added jitter of {jitter_new:.1e} to the diagonal", NumericalWarning)
/home/j/v/anaconda3/envs/ggp/lib/python3.6/site-packages/gpytorch/utils/cholesky.py:44: NumericalWarning: A not p.d., added jitter of 1.0e-05 to the diagonal
  warnings.warn(f"A not p.d., added jitter of {jitter_new:.1e} to the diagonal", NumericalWarning)
/home/j/v/anaconda3/envs/ggp/lib/python3.6/site-packages/gpytorch/utils/cholesky.py:44: NumericalWarning: A not p.d., added jitter of 1.0e-04 to the diagonal
  warnings.warn(f"A not p.d., added jitter of {jitter_new:.1e} to the diagonal", NumericalWarning)

NotPSDError                               Traceback (most recent call last)
<ipython-input-11-f33484a0f2d7> in <module>
      1 for epoch in range(1, n_epochs + 1):
      2     with gpytorch.settings.use_toeplitz(False):
----> 3         train(epoch)
      4         test()
      5         scheduler.step()

<ipython-input-10-bc9a57700a55> in train(epoch)
     21         data, target = data.cuda(), target.cuda()
     22         optimizer.zero_grad()
---> 23         output = model(data)
     24         loss = -mll(output, target)
     25         loss.backward()

~/v/anaconda3/envs/ggp/lib/python3.6/site-packages/gpytorch/module.py in __call__(self, *inputs, **kwargs)
     28
     29     def __call__(self, *inputs, **kwargs):
---> 30         outputs = self.forward(*inputs, **kwargs)
     31         if isinstance(outputs, list):
     32             return [_validate_module_outputs(output) for output in outputs]

<ipython-input-9-fc172c26d12f> in forward(self, x)
     15         # This next line makes it so that we learn a GP for each feature
     16         features = features.transpose(-1, -2).unsqueeze(-1)
---> 17         res = self.gp_layer(features)
     18         return res
     19

~/v/anaconda3/envs/ggp/lib/python3.6/site-packages/gpytorch/models/approximate_gp.py in __call__(self, inputs, prior, **kwargs)
     79         if inputs.dim() == 1:
     80             inputs = inputs.unsqueeze(-1)
---> 81         return self.variational_strategy(inputs, prior=prior, **kwargs)

~/v/anaconda3/envs/ggp/lib/python3.6/site-packages/gpytorch/variational/independent_multitask_variational_strategy.py in __call__(self, x, prior, **kwargs)
     45
     46     def __call__(self, x, prior=False, **kwargs):
---> 47         function_dist = self.base_variational_strategy(x, prior=prior, **kwargs)
     48         if (
     49             self.task_dim > 0

~/v/anaconda3/envs/ggp/lib/python3.6/site-packages/gpytorch/variational/_variational_strategy.py in __call__(self, x, prior, **kwargs)
    109         if not self.variational_params_initialized.item():
    110             prior_dist = self.prior_distribution
--> 111             self.variational_distribution.initialize_variational_distribution(prior_dist)
    112             self.variational_params_initialized.fill_(1)
    113

~/v/anaconda3/envs/ggp/lib/python3.6/site-packages/gpytorch/variational/cholesky_variational_distribution.py in initialize_variational_distribution(self, prior_dist)
     51         self.variational_mean.data.copy_(prior_dist.mean)
     52         self.variational_mean.data.add_(torch.randn_like(prior_dist.mean), alpha=self.mean_init_std)
---> 53         self.chol_variational_covar.data.copy_(prior_dist.lazy_covariance_matrix.cholesky().evaluate())

~/v/anaconda3/envs/ggp/lib/python3.6/site-packages/gpytorch/lazy/lazy_tensor.py in cholesky(self, upper)
    960             (LazyTensor) Cholesky factor (triangular, upper/lower depending on "upper" arg)
    961         """
--> 962         chol = self._cholesky(upper=False)
    963         if upper:
    964             chol = chol._transpose_nonbatch()

~/v/anaconda3/envs/ggp/lib/python3.6/site-packages/gpytorch/utils/memoize.py in g(self, *args, **kwargs)
     57         kwargs_pkl = pickle.dumps(kwargs)
     58         if not _is_in_cache(self, cache_name, *args, kwargs_pkl=kwargs_pkl):
---> 59             return _add_to_cache(self, cache_name, method(self, *args, **kwargs), *args, kwargs_pkl=kwargs_pkl)
     60         return _get_from_cache(self, cache_name, *args, kwargs_pkl=kwargs_pkl)
     61

~/v/anaconda3/envs/ggp/lib/python3.6/site-packages/gpytorch/lazy/lazy_tensor.py in _cholesky(self, upper)
    423
    424         # contiguous call is necessary here
--> 425         cholesky = psd_safe_cholesky(evaluated_mat, upper=upper).contiguous()
    426         return TriangularLazyTensor(cholesky, upper=upper)
    427

~/v/anaconda3/envs/ggp/lib/python3.6/site-packages/gpytorch/utils/cholesky.py in psd_safe_cholesky(A, upper, out, jitter, max_tries)
    106             Number of attempts (with successively increasing jitter) to make before raising an error.
    107     """
--> 108     L = _psd_safe_cholesky(A, out=out, jitter=jitter, max_tries=max_tries)
    109     if upper:
    110         if out is not None:

~/v/anaconda3/envs/ggp/lib/python3.6/site-packages/gpytorch/utils/cholesky.py in _psd_safe_cholesky(A, out, jitter, max_tries)
     46         if not torch.any(info):
     47             return L
---> 48         raise NotPSDError(f"Matrix not positive definite after repeatedly adding jitter up to {jitter_new:.1e}.")
     49
     50

NotPSDError: Matrix not positive definite after repeatedly adding jitter up to 1.0e-04.
```
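For context, the failure happens while `CholeskyVariationalDistribution` is being initialized from the prior: `psd_safe_cholesky` adds jitter of 1.0e-06, 1.0e-05, then 1.0e-04 to the diagonal and still cannot factor the prior covariance. Below is a minimal sketch of a common workaround, assuming GPyTorch's `settings.cholesky_jitter` context manager; the training loop is the demo's own, and the jitter value of `1e-3` is an illustrative choice, not a verified fix:

```python
import gpytorch

# Sketch of a workaround (not a verified fix): start psd_safe_cholesky's
# retry ladder above the 1.0e-04 at which the demo currently gives up.
# `train`, `test`, `scheduler`, and `n_epochs` come from the demo notebook.
# Note: depending on the GPyTorch version, cholesky_jitter takes a single
# value (as below) or per-dtype keywords, e.g. cholesky_jitter(float=1e-3).
for epoch in range(1, n_epochs + 1):
    with gpytorch.settings.use_toeplitz(False), \
         gpytorch.settings.cholesky_jitter(1e-3):
        train(epoch)
        test()
        scheduler.step()
```

Running the model and data in double precision (`model.double()` plus casting the minibatches inside `train`) is another common way to keep the prior covariance numerically positive definite.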
Top GitHub Comments
@LichuanRen I will take a look later today.
Hi @dublinsky,
Have you been able to figure out what causes the warning?
Many thanks