
Does spspmm operation support autograd?

See original GitHub issue

Hi, you say autograd is supported for the value tensors, but it seems it doesn’t work in spspmm.

Like this:

import torch
import torch_sparse

# A: sparse 3x3 and B: sparse 3x2, in COO format with differentiable values.
indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0], requires_grad=True)
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2.0, 4.0], requires_grad=True)
# C = A @ B with m=3, k=3, n=2
indexC, valueC = torch_sparse.spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)

print(valueC.requires_grad)
print(valueC.grad_fn)

And the output is:

False
None

In my case, I want to parameterize both the sparse adjacency matrix and the feature matrix in a GCN, so both inputs need to be differentiable. I wonder if this is a bug or just the way it is.

Regards.

Issue Analytics

  • State: open
  • Created: 4 years ago
  • Reactions: 1
  • Comments: 17 (6 by maintainers)

Top GitHub Comments

1 reaction
changym3 commented, Mar 10, 2020

> That’s the only function that does not have proper autograd support. Gradients for sparse-sparse matrix multiplication are quite difficult to obtain (since they are usually dense). I had a working but slow implementation up to the 0.4.4 release, but removed it since it wasn’t a really good implementation. If you desperately need it, feel free to try it out.

Hey! Thanks for your great work! I installed the 0.4.4 release of torch_sparse and it works in my experiments! Maybe you could add this information to the documentation; it took me quite a while to figure out this no-autograd problem.

Thanks a lot again!
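
For anyone who still needs a differentiable spspmm and cannot pin the old release (pip install torch-sparse==0.4.4), one possible workaround - not from this thread, with a hypothetical helper name spspmm_dense - is a dense detour: build dense copies of both operands, multiply, and re-sparsify the product. It keeps gradients flowing to the value tensors, but the intermediate product is fully dense, which is exactly the difficulty the maintainer describes, so this is only a sketch for matrices small enough to densify:

import torch

def spspmm_dense(indexA, valueA, indexB, valueB, m, k, n):
    # Hypothetical fallback: differentiable sparse @ sparse via dense tensors.
    # sparse_coo_tensor and to_dense are differentiable w.r.t. the values.
    A = torch.sparse_coo_tensor(indexA, valueA, (m, k)).to_dense()
    B = torch.sparse_coo_tensor(indexB, valueB, (k, n)).to_dense()
    C = A @ B                              # dense (m, n) product
    indexC = C.detach().nonzero().t()      # COO indices of nonzero entries
    valueC = C[indexC[0], indexC[1]]       # differentiable gather
    return indexC, valueC

Called with the tensors from the question above, valueC comes back with grad_fn set, at the cost of materializing the full (m, k) and (k, n) matrices.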

0 reactions
rusty1s commented, Oct 11, 2022

With PyTorch 1.12, I assume you can also try to use the sparse-matrix multiplication from PyTorch directly. PyTorch recently integrated better sparse matrix support into its library 😃
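
As a rough sketch of that suggestion (assuming PyTorch ≥ 1.12; sparse-sparse support in torch.sparse.mm and its autograd behavior vary across versions, so treat this as illustrative rather than definitive), with the same matrices as in the question:

import torch

# Native PyTorch sparse COO tensors with differentiable values.
A = torch.sparse_coo_tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]],
                            [1.0, 2.0, 3.0, 4.0, 5.0], (3, 3),
                            requires_grad=True)
B = torch.sparse_coo_tensor([[0, 2], [1, 0]], [2.0, 4.0], (3, 2),
                            requires_grad=True)

C = torch.sparse.mm(A, B)       # sparse @ sparse, tracked by autograd
torch.sparse.sum(C).backward()  # reduce to a scalar and backpropagate
print(A.grad)                   # gradient w.r.t. A (a sparse tensor)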

Read more comments on GitHub >

Top Results From Across the Web

  • torch-sparse - PyPI: “This package consists of a small extension library of optimized sparse matrix operations with autograd support. This package currently consists of the following ...”
  • Autograd mechanics - PyTorch 1.13 documentation: “Supporting in-place operations in autograd is a hard matter, and we discourage their use in most cases. Autograd's aggressive buffer freeing and reuse ...”
  • dgl.ops - DGL 0.9.1post1 documentation: “All operators are equipped with autograd (computing the input gradients given ...). Note that we support dot operator, which semantically is the same ...”
  • PyTorch Autograd - Towards Data Science: “This is where PyTorch's autograd comes in. ... to create tensors that support gradient calculations and operation tracking but as of PyTorch ...”
  • Pytorch autograd explained - Kaggle: “This is necessary because arbitrary operations on a tensor are not supported by autograd - only supported operations defined by the PyTorch API are.”
