Support for other integer types by MessagePassing
See original GitHub issue

🐛 Describe the bug

Why does PyG enforce `edge_index` to be of type `long`? Certain graphs can work properly with `int16`, such as molecules, which have an average of 20 atoms and will rarely surpass 32767 node indices unless batch sizes larger than 1000 are used. Instead of enforcing the dtype, we could simply ensure that there are no negative numbers (which happen when the edge index overflows), or that the maximum of a specific datatype is not reached.

Also, some hardware does not support `long`, such as TPU and IPU, which are limited to `int32`.
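The dtype argument above can be made concrete with a small sketch: given the number of nodes in a (batched) graph, pick the smallest signed integer type whose range can hold every index. The helper name and the plain-Python dtype table are illustrative assumptions, not part of PyG's API.

```python
# Maximum representable value for each signed integer dtype,
# ordered from smallest to largest (illustrative table, not PyG code).
INT_LIMITS = [
    ("int16", 2**15 - 1),  # 32767 -- enough for ~1600 molecules of 20 atoms
    ("int32", 2**31 - 1),
    ("int64", 2**63 - 1),
]

def smallest_index_dtype(num_nodes: int) -> str:
    """Return the smallest dtype that can represent indices 0..num_nodes-1."""
    for name, limit in INT_LIMITS:
        if num_nodes - 1 <= limit:
            return name
    raise OverflowError("graph too large even for int64 indices")
```

For a single molecule of 20 atoms this returns `"int16"`; only a batch of more than ~1600 such molecules would push the indices past 32767 and require `int32`.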
Environment

- PyG version: All
- PyTorch version: All
- OS: All
- Python version: All
- CUDA/cuDNN version: All
- How you installed PyTorch and PyG (`conda`, `pip`, source): Any
- Any other relevant information (e.g., version of `torch-scatter`): Any
Issue Analytics

- State:
- Created a year ago
- Comments: 8 (7 by maintainers)
Oh, my bad. I just tested with `torch.short` and was convinced it is only working for `torch.long`. I guess we can then start looking into supporting both `torch.long` and `torch.int`.

See #5281 for a proposal to relax the type assertion in the message-passing interface. This patch effectively lets the execution backend decide which integer types to support when executing the aggregation step. That said, with the default CPU backend the error message changes to: