
How to use batch adjacency matrix with GraphUNet

See original GitHub issue

❓ Questions & Help

I want to directly input a batch of adjacency matrices (batch, n_nodes, n_nodes) and node features (batch, n_nodes, feats) into GraphUNet. I convert the dense matrices to sparse edge indices. However, the number of edges varies across the batch, so the edge-weight initialization edge_weight = x.new_ones(edge_index.size(1)) fails. Am I missing something about batch processing? How can I efficiently use a batch of dense matrices as input?
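The varying edge count stops being a problem once the dense batch is flattened into one block-diagonal graph, which is how PyTorch Geometric represents batches (its torch_geometric.utils.dense_to_sparse may also be relevant here). A minimal pure-PyTorch sketch of the idea, with a hypothetical helper name and the simplifying assumption that every graph uses all N padded node slots (no mask):

```python
import torch

def batched_dense_to_sparse(adj):
    # adj: (B, N, N) dense batch; hypothetical helper, not a PyG API.
    # Returns edge_index (2, E), edge_weight (E,), batch (B*N,).
    B, N, _ = adj.shape
    graph, row, col = adj.nonzero().t()
    # Offset node indices so graph i occupies node slots [i*N, (i+1)*N).
    edge_index = torch.stack([row + graph * N, col + graph * N])
    edge_weight = adj[graph, row, col]
    batch = torch.arange(B).repeat_interleave(N)
    return edge_index, edge_weight, batch

adj = torch.zeros(2, 2, 2)
adj[0, 0, 1] = 1.0  # graph 0: edge 0 -> 1
adj[1, 1, 0] = 2.0  # graph 1: edge 1 -> 0
edge_index, edge_weight, batch = batched_dense_to_sparse(adj)
print(edge_index.tolist())   # [[0, 3], [1, 2]]
print(edge_weight.tolist())  # [1.0, 2.0]
print(batch.tolist())        # [0, 0, 1, 1]
```

With all graphs merged into one, edge_index.size(1) is simply the total edge count of the batch, so edge_weight = x.new_ones(edge_index.size(1)) works regardless of how many edges each individual graph has.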

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 5 (4 by maintainers)

Top GitHub Comments

1 reaction
LingxiaoShawn commented on Jul 15, 2020

Hi @rusty1s, how about this?

import torch

def to_sparse_batch(x, adj, mask):
    # Transform x (B x N x D), adj (B x N x N), mask (B x N), where N is N_max,
    # into flat x, edge_index, edge_weight, and batch (PyG COO format).
    B, N_max, D = x.shape

    # Zero out rows, then columns, belonging to padded (masked-out) nodes.
    adj = (adj * mask.unsqueeze(2)).transpose(1, 2)
    adj = (adj * mask.unsqueeze(2)).transpose(1, 2)

    # Number of real nodes in each graph (size B).
    num_nodes_graphs = mask.sum(dim=1)

    # Cumulative node offsets (size B).
    offset_graphs = torch.cumsum(num_nodes_graphs, dim=0)

    # Flatten node features, dropping padded nodes: total_nodes x D.
    x = x.reshape(-1, D)[mask.reshape(-1)]

    # Edge weights and indices from the non-zero adjacency entries.
    edge_weight = adj[adj.nonzero(as_tuple=True)]
    nnz_index = adj.nonzero().t()
    graph_idx, edge_index = nnz_index[0], nnz_index[1:]

    # Batch vector; nodes of the last graph keep the fill value B - 1.
    batch = torch.zeros_like(x[:, 0], dtype=torch.int64).fill_(B - 1)

    # Shift each graph's node indices by the number of nodes before it,
    # and assign batch membership.
    start = 0
    for i, offset in enumerate(offset_graphs[:-1]):
        edge_index[:, graph_idx == i + 1] += offset
        batch[start:offset] = i
        start = offset

    return x, edge_index, edge_weight, batch
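A quick sanity check of the helper above on two toy graphs (the function is repeated here so the snippet runs standalone; the input values are made up, and the mask is assumed to be left-packed as produced by torch_geometric.utils.to_dense_batch):

```python
import torch

def to_sparse_batch(x, adj, mask):
    # Same logic as the helper above, repeated for a self-contained run.
    B, N_max, D = x.shape
    adj = (adj * mask.unsqueeze(2)).transpose(1, 2)
    adj = (adj * mask.unsqueeze(2)).transpose(1, 2)
    num_nodes_graphs = mask.sum(dim=1)
    offset_graphs = torch.cumsum(num_nodes_graphs, dim=0)
    x = x.reshape(-1, D)[mask.reshape(-1)]
    edge_weight = adj[adj.nonzero(as_tuple=True)]
    nnz_index = adj.nonzero().t()
    graph_idx, edge_index = nnz_index[0], nnz_index[1:]
    batch = torch.zeros_like(x[:, 0], dtype=torch.int64).fill_(B - 1)
    start = 0
    for i, offset in enumerate(offset_graphs[:-1]):
        edge_index[:, graph_idx == i + 1] += offset
        batch[start:offset] = i
        start = offset
    return x, edge_index, edge_weight, batch

# Two graphs padded to N_max = 3: graph 0 has 2 nodes and one edge,
# graph 1 has 3 nodes and two edges.
x = torch.randn(2, 3, 4)
mask = torch.tensor([[True, True, False], [True, True, True]])
adj = torch.zeros(2, 3, 3)
adj[0, 0, 1] = 1.0
adj[1, 0, 1] = 2.0
adj[1, 1, 2] = 3.0

x_flat, edge_index, edge_weight, batch = to_sparse_batch(x, adj, mask)
print(x_flat.shape)          # torch.Size([5, 4])
print(edge_index.tolist())   # [[0, 2, 3], [1, 3, 4]]
print(edge_weight.tolist())  # [1.0, 2.0, 3.0]
print(batch.tolist())        # [0, 0, 1, 1, 1]
```

Note that graph 1's node indices are shifted by 2 (the node count of graph 0), so the two graphs coexist as one block-diagonal graph, which is exactly the format GraphUNet's edge_index input expects.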
0 reactions
rusty1s commented on Jul 15, 2020

Sure, feel free to add it 😃


