
How to combine data batch with GCN?

See original GitHub issue

🐛 Bug

Hi! Thanks for your amazing framework. Recently I have been using GCN to extract spatial features from node features in a batched way, but I get the following error:

ValueError: `MessagePassing.propagate` only supports `torch.LongTensor` of shape
`[2, num_messages]` or `torch_sparse.SparseTensor` for argument `edge_index`.

The code I am using:

The edge_index and edge_weight are defined as below:

graph_edges = torch.tensor(graph_edges, dtype=torch.long, device="cuda:0").t().contiguous()
graph_edges_b = torch.stack([graph_edges for x in range(location_t.shape[0])], dim=0)
print("graph_weight", graph_weight.shape)    # [16, 171]
print("graph_edges_b", graph_edges_b.shape)  # [16, 2, 171]
print("feature_t", feature_t.shape)          # [16, 19, 512]

then

enc_feature = self.g_conv1(feature_t, graph_edges_b, graph_weight)

it raises the error below:

ValueError: `MessagePassing.propagate` only supports `torch.LongTensor` of shape `[2, num_messages]` or `torch_sparse.SparseTensor` for argument `edge_index`.

For more details:

The batch_size is 16, and all the graphs are fully connected, but the edge weights differ for each graph.
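For context, 171 edges per graph is exactly the number of unordered node pairs in a 19-node fully-connected graph (19 · 18 / 2 = 171). A minimal torch-free sketch of building such an edge list (a hypothetical helper, not code from the issue):

```python
from itertools import combinations

def fully_connected_edges(num_nodes):
    """Edge list of a fully-connected graph as [sources, targets],
    one entry per unordered node pair."""
    pairs = list(combinations(range(num_nodes), 2))
    return [[s for s, _ in pairs], [t for _, t in pairs]]

src, dst = fully_connected_edges(19)
print(len(src))  # → 171, matching graph_weight.shape[-1] above
```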

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 9 (4 by maintainers)

Top GitHub Comments

1 reaction
rusty1s commented, Aug 23, 2021

You can concatenate features as follows to avoid the for-loop:

feature = torch.cat(list(feature_t), dim=0)          # [16 * 19, 512]
graph_weight = torch.cat(list(graph_weight), dim=0)  # [16 * 171]
cumsum = 0
edge_indices = []
for edge_index, x in zip(graph_edges_b, feature_t):
    edge_indices.append(edge_index + cumsum)  # shift node ids for this graph
    cumsum += x.size(0)                       # 19 nodes per graph
graph_edges = torch.cat(edge_indices, dim=-1)  # [2, 16 * 171]

hidden_enc_feature = F.relu(self.g_conv1(feature, graph_edges, graph_weight))
enc_feature = F.relu(self.g_conv2(hidden_enc_feature, graph_edges, graph_weight))
out_batch = enc_feature.view(batch_size, num_nodes, -1)
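The key step above is shifting each graph's node indices by the number of nodes that come before it, so the 16 graphs merge into one big block-diagonal graph whose edge_index has the expected [2, num_messages] shape. A torch-free sketch of that offset logic on toy sizes (hypothetical helper, not the issue's code):

```python
def batch_edge_indices(edge_indices, num_nodes_per_graph):
    """Merge per-graph edge lists into one block-diagonal edge list.

    edge_indices: list of [sources, targets] pairs, one per graph.
    num_nodes_per_graph: node count of each graph, in the same order.
    Returns a single [sources, targets] over the merged node ids.
    """
    src_all, dst_all = [], []
    offset = 0
    for (src, dst), n in zip(edge_indices, num_nodes_per_graph):
        src_all += [s + offset for s in src]
        dst_all += [d + offset for d in dst]
        offset += n  # nodes of the next graph start after this one
    return [src_all, dst_all]

# Two toy graphs with 3 nodes each (edges as [sources, targets]):
g0 = [[0, 1], [1, 2]]
g1 = [[0, 2], [1, 0]]
print(batch_edge_indices([g0, g1], [3, 3]))
# → [[0, 1, 3, 5], [1, 2, 4, 3]]
```

This is the same convention PyG's DataLoader applies when it collates a list of graphs into one mini-batch.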
0 reactions
TianhangWang commented, Sep 14, 2021

I see! Thanks a lot! Hope you have a good day!


Top Results From Across the Web

Graph Classification & Batchwise Training · Issue #4 · tkipf/gcn
I have modified the graphconv layer to dense matrix to work with parallel data loader. And the "ind" (size: N*batch, values are normalized...

Advanced Mini-Batching — pytorch_geometric documentation
In its most general form, the PyG DataLoader will automatically increment the edge_index tensor by the cumulated number of nodes of all graphs...

6.1 Training GNN for Node Classification with Neighborhood ...
To use a sampler provided by DGL, one also need to combine it with DataLoader , which iterates over a set of indices...

GCN with Neo4j and PyTorch Using MUTAG Dataset in Ten ...
I shuffled and split the original MUTAG dataset into a train and test set. Then I created data loaders for each set with...

Simple scalable graph neural networks | by Michael Bronstein
In graph-sampling approaches, for each batch, a subgraph of the original graph is sampled, and a full GCN-like model is run on the...
