Does spspmm operation support autograd?
Hi, you say autograd is supported for the value tensors, but it does not seem to work in spspmm.
Like this:
import torch
import torch_sparse

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0], requires_grad=True)
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2.0, 4.0], requires_grad=True)

indexC, valueC = torch_sparse.spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)
print(valueC.requires_grad)
print(valueC.grad_fn)
And the output is:
False
None
In my case, I want to parameterize both the sparse adjacency matrix and the feature matrix in a GCN, so both inputs need to be differentiable. I wonder whether this is a bug or just the way it is.
Regards.
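For comparison, the same product computed densely does track gradients. Here is a minimal sketch, assuming a recent PyTorch where sparse tensor construction and to_dense() are differentiable with respect to the values (the dense detour is only for illustration; it would be impractical for large graphs):

import torch

valueA = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0], requires_grad=True)
valueB = torch.tensor([2.0, 4.0], requires_grad=True)

# Build dense 3x3 and 3x2 matrices from the same COO data as above.
A = torch.sparse_coo_tensor(
    torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]]), valueA, (3, 3)
).to_dense()
B = torch.sparse_coo_tensor(
    torch.tensor([[0, 2], [1, 0]]), valueB, (3, 2)
).to_dense()

C = A @ B  # dense matmul is fully tracked by autograd
print(C.requires_grad)  # True
print(C.grad_fn)        # <MmBackward0 object at ...>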
Hey! Thanks for your great work! I have installed the 0.4.4 release of torch_sparse, and it totally works in my experiments! Maybe you could add this information to the documentation; it took me so long to figure out this no-autograd problem. Thanks a lot again!
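Based on that report, a quick way to check that gradients flow under torch_sparse >= 0.4.4 is to run the original example with a backward pass (a sketch, using the same matrices as above):

import torch
import torch_sparse

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0], requires_grad=True)
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2.0, 4.0], requires_grad=True)

indexC, valueC = torch_sparse.spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)
valueC.sum().backward()  # should now populate .grad on both value tensors
print(valueA.grad)
print(valueB.grad)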
With PyTorch 1.12, I assume you can also try to use the sparse-matrix multiplication from PyTorch directly. PyTorch recently integrated better sparse matrix support into its library 😃
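A hedged sketch of that route with core PyTorch only; whether sparse-sparse matmul tracks gradients for both operands can depend on the PyTorch version, so treat this as something to verify on your install:

import torch

valueA = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0], requires_grad=True)
valueB = torch.tensor([2.0, 4.0], requires_grad=True)

A = torch.sparse_coo_tensor(
    torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]]), valueA, (3, 3)
)
B = torch.sparse_coo_tensor(
    torch.tensor([[0, 2], [1, 0]]), valueB, (3, 2)
)

C = torch.sparse.mm(A, B)  # sparse @ sparse -> sparse result
print(C.requires_grad)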