Best way to learn adjacency matrix for a graph?
❓ Questions & Help
Hi,
Apologies if this has already been posted (though I spent a good half an hour trying to find a question like this). I am trying to figure out what the best way is to learn a parameterisation of a graph (i.e. have a neural net predict from some input: the nodes, their features, and the adjacency matrix).
I see that many of the graph conv layers take in a 2D tensor of edge indices for `edge_index`, though we would not be able to backprop through this. It seems like either one would have to (a) define a fully-connected graph and instead infer the edge weights (where a weight of 0 between nodes (i, j) would effectively simulate two nodes not being connected), or (b), if it's possible, directly pass the adjacency matrix in as one dense (n, n) matrix (though I assume this can only be binary, so that may also be problematic).
Any thoughts? Thanks in advance.
Issue Analytics
- Created 3 years ago
- Comments: 6 (3 by maintainers)
Top GitHub Comments
Note that we also provide GNNs that can operate on dense input. For example, this is done in the DiffPool model. An alternative would be to sparsify your dense adjacency matrix based on a user-defined threshold (similar to a ReLU activation). If you utilize both `edge_index` and `edge_weight` in your follow-up GNN, your graph generation is fully-trainable (except for the values you remove).
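The thresholding idea above can be sketched in plain PyTorch; the tensor size, seed, and threshold value below are illustrative, not from the original comment:

```python
import torch

torch.manual_seed(0)

# Hypothetical dense adjacency produced by a network (requires grad for backprop).
adj = torch.rand(5, 5, requires_grad=True)

# Sparsify with a user-defined threshold; like a ReLU, entries at or below the
# cutoff are discarded and carry no gradient.
threshold = 0.5
mask = adj > threshold                            # boolean mask, non-differentiable
edge_index = mask.nonzero().t()                   # shape [2, num_edges]
edge_weight = adj[edge_index[0], edge_index[1]]   # differentiable surviving values
```

The resulting `edge_index`/`edge_weight` pair is what a weight-aware sparse layer would consume; gradients flow back into `adj` only through the kept entries.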
Thanks for your response!
In my case, I'd want to use the inferred outputs in a downstream manner (i.e., both the nodes' features and the adjacency matrix) and have that all be backproppable, e.g. a pipeline where `E` is the adjacency matrix and `X` are the node features. I assume that `E`, however, needs to be sparse in order for it to work with the GNNs later on in the network.

In the case of the autoencoder, its output (a dense adjacency matrix) just happens to also be the end of the network, which is convenient. In my case, it still seems like the most plausible option would be to fix the adjacency matrix so that the graph is fully-connected, and instead have the network infer the edge weights. Let me know if you agree with this line of thinking.
Thanks again!