
GATConv only supports input x of dimensions 2

See original GitHub issue

I am running a GNN on a mesh. The inputs have shape BxNxC, where B is the batch size, N is the number of input nodes, and C is the number of channels per node. This input works fine with other conv layers such as GCNConv and ChebConv, but GATConv raises the error 'Static graphs not supported in `GATConv`'. Its forward code looks like this:

def forward(self, x: Union[Tensor, OptPairTensor], edge_index: Adj,
                size: Size = None, return_attention_weights=None):
        # type: (Union[Tensor, OptPairTensor], Tensor, Size, NoneType) -> Tensor  # noqa
        # type: (Union[Tensor, OptPairTensor], SparseTensor, Size, NoneType) -> Tensor  # noqa
        # type: (Union[Tensor, OptPairTensor], Tensor, Size, bool) -> Tuple[Tensor, Tuple[Tensor, Tensor]]  # noqa
        # type: (Union[Tensor, OptPairTensor], SparseTensor, Size, bool) -> Tuple[Tensor, SparseTensor]  # noqa
        r"""
        Args:
            return_attention_weights (bool, optional): If set to :obj:`True`,
                will additionally return the tuple
                :obj:`(edge_index, attention_weights)`, holding the computed
                attention weights for each edge. (default: :obj:`None`)
        """
        H, C = self.heads, self.out_channels

        x_l: OptTensor = None
        x_r: OptTensor = None
        alpha_l: OptTensor = None
        alpha_r: OptTensor = None
        if isinstance(x, Tensor):
            assert x.dim() == 2, 'Static graphs not supported in `GATConv`.'
            x_l = x_r = self.lin_l(x).view(-1, H, C)
            alpha_l = (x_l * self.att_l).sum(dim=-1)
            alpha_r = (x_r * self.att_r).sum(dim=-1)
        else:
            x_l, x_r = x[0], x[1]
            assert x[0].dim() == 2, 'Static graphs not supported in `GATConv`.'
            x_l = self.lin_l(x_l).view(-1, H, C)
            alpha_l = (x_l * self.att_l).sum(dim=-1)
            if x_r is not None:
                x_r = self.lin_r(x_r).view(-1, H, C)
                alpha_r = (x_r * self.att_r).sum(dim=-1)

        assert x_l is not None
        assert alpha_l is not None

        if self.add_self_loops:
            if isinstance(edge_index, Tensor):
                num_nodes = x_l.size(0)
                if x_r is not None:
                    num_nodes = min(num_nodes, x_r.size(0))
                if size is not None:
                    num_nodes = min(size[0], size[1])
                edge_index, _ = remove_self_loops(edge_index)
                edge_index, _ = add_self_loops(edge_index, num_nodes=num_nodes)
            elif isinstance(edge_index, SparseTensor):
                edge_index = set_diag(edge_index)

        # propagate_type: (x: OptPairTensor, alpha: OptPairTensor)
        out = self.propagate(edge_index, x=(x_l, x_r),
                             alpha=(alpha_l, alpha_r), size=size)

        alpha = self._alpha
        self._alpha = None

        if self.concat:
            out = out.view(-1, self.heads * self.out_channels)
        else:
            out = out.mean(dim=1)

        if self.bias is not None:
            out += self.bias

        if isinstance(return_attention_weights, bool):
            assert alpha is not None
            if isinstance(edge_index, Tensor):
                return out, (edge_index, alpha)
            elif isinstance(edge_index, SparseTensor):
                return out, edge_index.set_value(alpha, layout='coo')
        else:
            return out

So it seems like it's expecting the input x to be two-dimensional, which is not the case with my input. I have the same issue with GATv2Conv, which addresses the static-attention limitation of GATConv. So, does GATConv not support multiple graph inputs as a minibatch? Or is there something I am missing here? Please help.
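
A minimal sketch that reproduces the behaviour (the shapes here are illustrative, not my actual mesh sizes):

import torch
from torch_geometric.nn import GCNConv, GATConv

x = torch.randn(8, 100, 16)                   # [B, N, C] "static graph" input
edge_index = torch.randint(0, 100, (2, 400))  # [2, num_edges], shared by all B graphs

out = GCNConv(16, 32)(x, edge_index)          # works: returns a [8, 100, 32] tensor
out = GATConv(16, 32)(x, edge_index)          # AssertionError: Static graphs not supported in `GATConv`.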

Issue Analytics

  • State: open
  • Created 2 years ago
  • Comments: 11 (4 by maintainers)

Top GitHub Comments

1 reaction
rusty1s commented, Apr 25, 2022

The x passed to the layer needs to correspond to the node feature matrix of your data/batch object:

from torch_geometric.data import Batch, Data
from torch_geometric.nn import GATConv

# x: [num_graphs, num_nodes, in_channels]; edge_index: [2, num_edges]
data_list = [Data(x=x_, edge_index=edge_index) for x_ in x]
batch = Batch.from_data_list(data_list)
layer = GATConv(in_channels=16, out_channels=16)
result = layer(batch.x, edge_index=batch.edge_index)
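
Batch.from_data_list concatenates the per-graph node features into a single two-dimensional [num_graphs * num_nodes, in_channels] matrix and offsets each graph's edge_index, so the layer operates on one large disjoint graph. A small sketch of getting a per-graph output back, assuming x is the stacked [B, N, C] tensor and every graph has the same number of nodes:

B, N, C = x.size()                       # x: the stacked [B, N, C] node features
out = layer(batch.x, batch.edge_index)   # [B * N, heads * out_channels]
out = out.view(B, N, -1)                 # reshape back to [B, N, heads * out_channels]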
1 reaction
radandreicristian commented, Apr 28, 2022

I’m working on traffic data. The shape of a batch of input data is [batch_size, seq_len, n_nodes, d_hidden].

import torch
import einops
from torch_geometric.data import Batch, Data
from torch_geometric.nn import GATConv

x = torch.randn((8, 12, 207, 16))

# 1200 random edges among the 207 nodes, in sparse (COO) format.
edge_index = torch.randint(high=207, size=(2, 1200))

# Combine the batch and seq-len dims into one - new shape is [batch_size * seq_len, n_nodes, d_hidden].
x = einops.rearrange(x, 'b l n f -> (b l) n f')

# Convert the 3D tensor into a list of 2D tensors (each is a graph of shape [n_nodes, d_hidden]).
x = list(x)

# Build a list of Data objects, each containing an item from the list above and the (same) edge index.
x = [Data(x=x_, edge_index=edge_index) for x_ in x]

x = Batch.from_data_list(x)

layer = GATConv(in_channels=16, out_channels=16)

result = layer(x, edge_index=edge_index)

Here, x is not a Tensor but a DataBatch, so it falls into the “else” branch of GATConv’s forward shown above (rather than the Tensor branch).

Here’s an approach that worked for me, in case anyone wants to accomplish something similar. I’m not sure it’s the most elegant way, but it works.

x = torch.randn((8, 12, 207, 16))
edge_index = torch.randint(high=207, size=(2, 1200))
x = einops.rearrange(x, 'b l n f -> (b l) n f')
layer = GATConv(in_channels=16, out_channels=16)

# Apply the layer to each [n_nodes, d_hidden] graph separately and stack the per-graph outputs.
result = torch.stack([layer(graph, edge_index=edge_index) for graph in x], dim=0)
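
A rough batched alternative, following the Batch.from_data_list suggestion from the earlier comment instead of looping in Python (same illustrative shapes as above); it runs the layer once over the disjoint union of all 96 graphs:

import torch
import einops
from torch_geometric.data import Batch, Data
from torch_geometric.nn import GATConv

x = torch.randn((8, 12, 207, 16))
edge_index = torch.randint(high=207, size=(2, 1200))

x = einops.rearrange(x, 'b l n f -> (b l) n f')        # [96, 207, 16]
batch = Batch.from_data_list(
    [Data(x=x_, edge_index=edge_index) for x_ in x])   # disjoint union of 96 graphs

layer = GATConv(in_channels=16, out_channels=16)
out = layer(batch.x, batch.edge_index)                 # [96 * 207, 16]
out = out.view(96, 207, -1)                            # back to [batch_size * seq_len, n_nodes, d_hidden]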
Read more comments on GitHub.

Top Results From Across the Web

  • GATConv only supports input x of dimensions 2 #2844 - GitHub
  • torch_geometric.nn.conv.gat_conv - PyTorch Geometric
  • AssertionError in torch_geometric.nn.GATConv - Stack Overflow
  • Convolutional Layers · GraphNeuralNetworks.jl - JuliaHub
  • GATConv — DGL 0.9.1post1 documentation
