Some confusion about the speed of DGCNN
❓ Questions & Help
Thank you for the code. I have some confusion about the speed of DGCNN. I found that it is much slower than this implementation. I used the `batch` parameter as you suggested in this issue. However, the speed is still very slow (about 6 s per batch at inference). Moreover, to my surprise, training (about 2 s per batch) is faster than inference. Do you know how to solve this?
Issue Analytics
- Created: 3 years ago
- Comments: 9 (5 by maintainers)
Top GitHub Comments
A `batch` of `None` refers to `batch_size == 1`. It can be more computationally efficient to out-source the kNN calculation to the CPU (depending on your `batch_size` and `num_nodes`). This is just something I would test in practice via runtime measurement. We are also working on a better `num_workers` interface for batched computation.

Thanks for providing the example code. Indeed, the major bottleneck of DGCNN is the dynamic edge creation via `knn` before each layer execution. It currently amounts to about 99% of the runtime of your script. We have plans to support a faster `knn` implementation in the future, but for now, there is nothing I can do about it.

Your linked repository computes the `knn` graph via a pairwise-distance computation followed by a `topk` filtering. This approach can, in fact, be very fast, since it trades memory for speed: the code computes a potentially huge pairwise distance matrix of shape `[num_nodes, num_nodes]`, which leads to better parallelization. Your linked repository has a further difference: it expects all examples to have the same number of points, which leads to faster runtime but cannot be applied to examples of varying size (which are likely to occur in real-world scenarios).

To answer your questions: the runtime is dominated by the `knn` computation rather than by neural network execution. If you want to speed up your model, I suggest using the `knn` implementation from the linked repository and `EdgeConv` instead of `DynamicEdgeConv`. Keep in mind that you then need to reshape your feature matrix to `[batch_size, num_nodes, num_features]`.
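The dense pairwise-distance trick described in the comments above can be sketched in plain PyTorch. This is a minimal illustration, not code from either repository: `knn_dense` and `edge_features` are hypothetical helper names, and the `[batch_size, num_nodes, num_features]` layout matches the reshape mentioned in the last comment.

```python
import torch


def knn_dense(x, k):
    """k-nearest-neighbor indices via a dense pairwise-distance matrix.

    x: [batch_size, num_nodes, num_features]
    Returns [batch_size, num_nodes, k] neighbor indices (self lands in slot 0,
    since each point has distance 0 to itself). Trades memory (an [N, N]
    matrix per example) for speed, as in the topk-based approach above.
    """
    inner = torch.bmm(x, x.transpose(1, 2))          # [B, N, N] dot products
    sq = x.pow(2).sum(dim=-1, keepdim=True)          # [B, N, 1] squared norms
    dist = sq - 2.0 * inner + sq.transpose(1, 2)     # ||x_i - x_j||^2
    return (-dist).topk(k, dim=-1).indices           # smallest distances first


def edge_features(x, idx):
    """Build EdgeConv-style inputs [x_j - x_i, x_i] from precomputed neighbors.

    x:   [batch_size, num_nodes, num_features]
    idx: [batch_size, num_nodes, k] from knn_dense
    Returns [batch_size, num_nodes, k, 2 * num_features].
    """
    b, n, f = x.shape
    k = idx.size(-1)
    neighbors = torch.gather(
        x.unsqueeze(1).expand(b, n, n, f),           # broadcast all points
        2,
        idx.unsqueeze(-1).expand(b, n, k, f),        # pick the k neighbors
    )
    central = x.unsqueeze(2).expand(b, n, k, f)
    return torch.cat([neighbors - central, central], dim=-1)
```

The `[num_nodes, num_nodes]` distance matrix is exactly what trades memory for speed; for clouds too large to hold that matrix, the per-edge `knn` used by `DynamicEdgeConv` remains the fallback.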