Support Running GNNs on Specific GPU
Hi DeepRobust Team,
I’ve encountered a problem when testing GNNs on devices other than `cuda:0`. To reproduce, simply modify line 18 in `test_gcn.py` to

    device = torch.device("cuda:2" if torch.cuda.is_available() else "cpu")

Then the error message is prompted as follows:
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices
I found it’s caused by `normalize_adj_tensor` and `degree_normalize_adj_tensor`, where the device is set to `cuda:0` by default regardless of where `adj` actually is. Though many people run experiments on `cuda:0` by default, it would be even better to support running GNNs in DeepRobust on other devices specified via the `device` variable.
Could you add this feature in the next update? Thank you. 😀
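The fix being requested could be sketched as follows — a minimal illustration of symmetric adjacency normalization, not DeepRobust's actual implementation; the relevant change is deriving the target device from `adj.device` instead of hard-coding `cuda:0`:

```python
import torch

def normalize_adj_tensor(adj):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} (simplified sketch).

    All intermediate tensors are created on adj's own device, so the
    function works on cuda:2, cpu, or any other device without a mismatch.
    """
    device = adj.device  # instead of a hard-coded torch.device("cuda:0")
    n = adj.shape[0]
    adj = adj + torch.eye(n, device=device)  # add self-loops on the same device
    deg = adj.sum(dim=1)
    d_inv_sqrt = deg.pow(-0.5)
    d_inv_sqrt[torch.isinf(d_inv_sqrt)] = 0.0  # guard isolated nodes
    D = torch.diag(d_inv_sqrt)
    return D @ adj @ D
```

With this pattern, the returned tensor always lives on the same device as the input, so no `RuntimeError` about mismatched devices can arise from inside the helper.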
Issue Analytics
- State:
- Created 2 years ago
- Comments: 7 (2 by maintainers)
Top GitHub Comments
Sorry for the late reply. I have just fixed the issue. Thank you @LFhase and @EdisonLeeeee !
Thank you @ChandlerBang and @EdisonLeeeee. I think @EdisonLeeeee's solution would do the job if `adj.device` can always be accessed in each call of the two functions. Would you like to consider updating the two lines if it’s feasible? 😃 BTW, I don’t have further questions.