Indexing performance in backward
See original GitHub issue (#95).

Hi,

First of all, thanks for the great library!

I've been experimenting with it and noticed that indexing in nn.MessagePassing uses regular indexing (i.e., x[idx]). People have suggested in this issue that either torch.index_select() or torch.nn.functional.embedding might be faster.

Do you have any thoughts on this? For the use case in MessagePassing, I think we might benefit from using the less general index_select, for instance. I'd be happy to add a PR for that, if you're interested.
Issue Analytics
- State:
- Created 5 years ago
- Comments: 5 (5 by maintainers)
Top GitHub Comments
Speed-ups are really impressive 😃 Would you like to submit a PR? Thank you in advance.

I've written this (admittedly hacky) script to test x[idx], torch.index_select, and torch.nn.functional.embedding, using your gcn.py script as a starting point. Now, the use case is not necessarily general (also the model/data are small), but I guess it's enough to showcase the aforementioned issue. I've run it using my laptop's GPU and got the following results using PyTorch's bottleneck tool:
- torch.index_select
- torch.nn.functional.embedding
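The original benchmark script and its numbers are not reproduced on this page. A minimal stand-in sketch of the kind of comparison described (hypothetical shapes and iteration counts; the commenter's actual script was based on gcn.py and profiled with `python -m torch.utils.bottleneck`):

```python
import time
import torch

def time_backward(gather, n_iters=50):
    """Time forward + backward for one gather variant."""
    x = torch.randn(10_000, 64, requires_grad=True)
    idx = torch.randint(0, 10_000, (50_000,))
    start = time.perf_counter()
    for _ in range(n_iters):
        out = gather(x, idx)
        out.sum().backward()  # backward pass is where the variants differ
        x.grad = None         # reset accumulated gradients between runs
    return time.perf_counter() - start

variants = {
    "x[idx]": lambda x, idx: x[idx],
    "index_select": lambda x, idx: torch.index_select(x, 0, idx),
    "embedding": lambda x, idx: torch.nn.functional.embedding(idx, x),
}
for name, fn in variants.items():
    print(f"{name}: {time_backward(fn):.3f}s")
```

On GPU, wrap each timed region with `torch.cuda.synchronize()` before reading the clock, since CUDA kernels launch asynchronously.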