[Docs] Usage of gpytorch.settings.use_toeplitz in SKIP GPs
See original GitHub issue

📚 Documentation/Examples
Example in question: https://docs.gpytorch.ai/en/v1.1.1/examples/02_Scalable_Exact_GPs/Scalable_Kernel_Interpolation_for_Products_CUDA.html
In code cell 7, train() is called with use_toeplitz set to True:

...
with gpytorch.settings.use_toeplitz(True):
    %time train()
...
Later, however, inside train, this same setting is set to False:

...
with gpytorch.settings.use_toeplitz(False), gpytorch.settings.max_root_decomposition_size(30):
    # Get output from model
    output = model(train_x)
...
Two questions:
- The call to optimizer.step() remains outside of this context. Is this intentional? Does it matter?
- Doesn't setting this to False again undo the previous context? As I understand it, the kernel we are using for grid interpolation should be utilizing Toeplitz structure.
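On the second question, the behavior can be illustrated with a toy re-entrant context manager. This is a hedged sketch, not the actual gpytorch source: it only mimics how gpytorch's settings flags nest, with the innermost active context winning and the previous state restored on exit (so the inner use_toeplitz(False) overrides the outer True only for the duration of the model call).

```python
# Toy mimic (not gpytorch itself) of how nested settings contexts resolve.
class use_toeplitz:
    _state = True  # hypothetical default value for illustration

    def __init__(self, state):
        self._new_state = state

    def __enter__(self):
        # Save the previous state, install the new one.
        self._prev = use_toeplitz._state
        use_toeplitz._state = self._new_state

    def __exit__(self, *exc):
        # Restore whatever was active before this context.
        use_toeplitz._state = self._prev
        return False

    @classmethod
    def on(cls):
        return cls._state

observed = []
with use_toeplitz(True):                    # outer context, as in code cell 7
    observed.append(use_toeplitz.on())      # True
    with use_toeplitz(False):               # inner context, as inside train()
        observed.append(use_toeplitz.on())  # False: innermost context wins
    observed.append(use_toeplitz.on())      # True again after the inner exits
print(observed)  # [True, False, True]
```

So the inner context does temporarily undo the outer one, but only for the code executed inside it; it does not cancel the outer context for the rest of the cell.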
Further, I also get this warning, which seems to concern the usage of nonzero() in PyTorch:
lib/python3.7/site-packages/gpytorch/utils/interpolation.py:119: UserWarning: This overload of nonzero is deprecated:
    nonzero()
Consider using one of the following signatures instead:
    nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:766.)
  left_boundary_pts = (lower_grid_pt_idxs < 0).nonzero()
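The warning is harmless: PyTorch 1.5+ deprecated the zero-argument nonzero() overload, and passing as_tuple explicitly silences it. A minimal sketch, using a hypothetical index tensor in place of the real lower_grid_pt_idxs:

```python
import torch

lower_grid_pt_idxs = torch.tensor([-1, 0, 2, -3])  # hypothetical stand-in

# Deprecated form (triggers the UserWarning on PyTorch >= 1.5):
#   left_boundary_pts = (lower_grid_pt_idxs < 0).nonzero()
# Equivalent warning-free form:
left_boundary_pts = (lower_grid_pt_idxs < 0).nonzero(as_tuple=False)
print(left_boundary_pts.squeeze(-1).tolist())  # indices of negative entries: [0, 3]
```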
Thanks!
Versions:
- GPyTorch: 1.1.1
- PyTorch: 1.6.0
Issue Analytics
- State:
- Created 3 years ago
- Comments: 5 (2 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
No, optimizer.step() will just update the parameters by the gradients, so it isn't affected by the context.

No, I meant: whether the optimizer call is within the context or outside doesn't matter. Is this correct?
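The maintainer's point can be shown with toy code. This is a hedged sketch, not GPyTorch or torch.optim: the settings context changes how the forward/backward pass is computed, while step() only reads the gradients already stored on the parameters, so its placement relative to the context is irrelevant.

```python
# Toy parameter and SGD optimizer illustrating that step() depends only on
# the stored .grad values, not on any active settings context.
class ToyParam:
    def __init__(self, value):
        self.value = value
        self.grad = None

class ToySGD:
    def __init__(self, params, lr=0.1):
        self.params = params
        self.lr = lr

    def step(self):
        # Pure read of .grad; no recomputation happens here.
        for p in self.params:
            p.value -= self.lr * p.grad

p = ToyParam(1.0)
# The forward/backward pass (possibly inside a settings context) leaves a
# gradient behind on the parameter:
p.grad = 2.0

# Calling step() outside any context applies exactly that stored gradient:
ToySGD([p]).step()
print(p.value)  # 1.0 - 0.1 * 2.0 = 0.8
```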