
Batch inner product decoder for VGAE


Hi,

Do you guys have any idea on how to implement an inner product decoder for architectures such as VGAE (Eq. 2) using mini-batches in torch_geometric?

Thank you!

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 11 (6 by maintainers)

Top GitHub Comments

1 reaction
dskoda commented, Jul 30, 2018

Thank you for your reply, @rusty1s !

After a few changes in your code, it was possible to make it. Here’s what worked for me:

import torch  # Z: [num_nodes, dim] latent embeddings; data: a torch_geometric Batch

row = torch.arange(0, data.num_nodes, dtype=torch.long)  # node indices
col = data.batch                                         # graph id of each node
index = torch.stack([row, col], dim=0)
value = Z.new_ones(data.num_nodes)
size = torch.Size([data.num_nodes, data.batch[-1].item() + 1])  # added one to the index of the last batch
assignment = torch.sparse_coo_tensor(index, value, size).to_dense()  # converted the sparse matrix to dense

mask = torch.matmul(assignment, assignment.t())

out = torch.matmul(Z, Z.t())
out = out * mask # masking with element-wise multiplication

I couldn’t make the line torch.matmul(assignment, assignment.t()) work for sparse matrices. It returned the following error: RuntimeError: Sparse tensors do not have strides.
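For reference, one way around that error is torch.sparse.mm, which accepts a sparse first operand and a dense second one. A sketch with a toy 5-node, 2-graph assignment matrix standing in for the thread's actual data:

```python
import torch

# Toy assignment matrix: nodes 0-2 belong to graph 0, nodes 3-4 to graph 1.
index = torch.tensor([[0, 1, 2, 3, 4],
                      [0, 0, 0, 1, 1]])
value = torch.ones(5)
assignment = torch.sparse_coo_tensor(index, value, (5, 2))

# torch.matmul rejects sparse inputs ("Sparse tensors do not have strides"),
# but torch.sparse.mm multiplies a sparse matrix by a dense one and
# returns a dense result:
mask = torch.sparse.mm(assignment, assignment.to_dense().t())
```

This still densifies the second operand, so it mainly sidesteps the error rather than the memory cost.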

I suppose this approach is not optimal, since dense matrix operations take more time than sparse ones.
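Incidentally, the same block-diagonal mask can be built without any sparse assignment matrix at all, by broadcasting an equality test over the batch vector. A sketch with toy tensors standing in for data.batch and the latent matrix Z:

```python
import torch

# Toy stand-ins: 5 nodes, the first 3 in graph 0, the last 2 in graph 1.
batch = torch.tensor([0, 0, 0, 1, 1])  # plays the role of data.batch
Z = torch.randn(5, 4)                  # plays the role of the latents

# mask[i, j] == 1 exactly when nodes i and j belong to the same graph,
# giving the block-diagonal mask with no sparse tensors involved.
mask = (batch.unsqueeze(0) == batch.unsqueeze(1)).float()

out = torch.matmul(Z, Z.t()) * mask  # masked inner-product decoder
```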

1 reaction
rusty1s commented, Jul 30, 2018

If I'm not mistaken, masking could be achieved by:

row = torch.arange(0, data.num_nodes, dtype=torch.long)
col = data.batch
index = torch.stack([row, col], dim=0)
value = Z.new_ones(data.num_nodes)
size = torch.Size([data.num_nodes, data.batch[-1]])  # off by one: should be data.batch[-1] + 1, as noted above
assignment = torch.sparse_coo_tensor(index, value, size).to_dense()

mask = torch.matmul(assignment, assignment.t())

out = torch.matmul(Z, Z.t())
out = out * mask

You can then train against your densified input graph. However, this involves computing and comparing two N x N dense matrices, but that is more a limitation of VGAE itself than of the proposed approach.
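One way to avoid materializing the full N x N matrices is to decode each graph on its own, so only an n_g x n_g matrix is ever built, where n_g is the size of graph g. A sketch with toy tensors standing in for data.batch and the latents:

```python
import torch

batch = torch.tensor([0, 0, 0, 1, 1])  # stand-in for data.batch
Z = torch.randn(5, 4)                  # stand-in for the latent matrix

# Decode each graph's adjacency independently; sigmoid gives edge
# probabilities as in the VGAE inner-product decoder.
num_graphs = int(batch.max()) + 1
adjs = [torch.sigmoid(Z[batch == g] @ Z[batch == g].t())
        for g in range(num_graphs)]
```

The trade-off is a Python-level loop over graphs instead of one large batched matmul.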
