[Roadmap] 0.7 Release Plan
As usual, we want to first thank all the contributors. In the past 0.6 release, we received 69 PRs from 33 new contributors! 11 new GNN examples were added to the repository, bringing the total to 70. Let's also congratulate @nv-dlasalle, who has been actively improving many of DGL's core GPU utilities, on becoming the first community committer. If you also wish to become a DGL committer, don't hesitate to start contributing to DGL today.
We have planned the following new features for 0.7:
- [Doc] A tutorial for training a node classification model on multiple GPUs in a single machine
- [Doc] A tutorial for training a graph classification model on multiple GPUs in a single machine
- [Doc] A tutorial for training a node classification model on multiple machines
- [Doc] Expand the blitz introduction series with tutorials for heterogeneous graphs
- [Core] Differentiable sparse-sparse adjacency matrix multiplication wrapped in graph semantics
- [Core] Differentiable sparse-sparse adjacency matrix addition wrapped in graph semantics
- [Core] PyTorch Lightning support
- [Core] Graclus pooling
- [Core] Biased neighbor sampling by node type
- [Core] Sweep all subgraph APIs and correct any inconsistent behaviors
- [Core] Speed up DGLGraph construction for graph classification tasks
- [Core] Support creating DGLGraph directly from CSR/CSC
- [Core] A new API for sorting a graph by src/dst nodes
- [Core] Enable NCCL for distributed sparse embedding across GPUs
- [Heterograph] Extend `update_all` to heterographs when both message reductions are summation
- [Heterograph] Unify the current two RGCN implementations
- [Heterograph] HGT NN module
- [Distributed] Distributed embedding with synchronized gradient updates
- [Distributed] Allow killing all training jobs by keyboard signals (e.g., ctrl+c)
- [Distributed] Support computing out-degrees
- [Distributed][Doc] Doc for how to preprocess graph for link prediction
- [Model] TGN
- [Model] TGAT
- [Model] GraphSIM
- [Model] InfoGraph
- [Model] MVGRL
- [Model] DimeNet / DimeNet++
- [Model] GRACE
- [Model] DeeperGCN
- [Model] GraphSAINT
- [Model] node2vec
- [Model] C&S
- (Experimental) Utilities for visualizing GNNs by GNNVis
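The two sparse-sparse adjacency items above have a natural graph reading: multiplying the adjacency matrices of two graphs produces a new graph whose edge (u, w) exists whenever there is a two-hop path u → v → w, with the edge weight counting those paths. As a minimal pure-Python sketch of that semantics (not DGL's actual API; `spspmm_edges` and the example graphs here are hypothetical):

```python
from collections import defaultdict

def spspmm_edges(a_edges, b_edges):
    """Sparse-sparse adjacency product as graph composition.

    The result has an edge (u, w) with weight equal to the number of
    length-2 paths u -> v -> w that go through an edge of A and then
    an edge of B.
    """
    # Index B's edges by source node for fast lookup.
    b_adj = defaultdict(list)
    for v, w in b_edges:
        b_adj[v].append(w)
    # Accumulate path counts: each (u, v) in A extends to every (v, w) in B.
    out = defaultdict(int)
    for u, v in a_edges:
        for w in b_adj[v]:
            out[(u, w)] += 1
    return dict(out)

# Example: co-authorship from a bipartite "author writes paper" graph,
# composed with its reverse ("paper written-by author").
writes = [(0, 10), (1, 10), (1, 11)]
written_by = [(v, u) for u, v in writes]
coauthor = spspmm_edges(writes, written_by)
# Authors 0 and 1 are linked because they share paper 10;
# author 1 has self-weight 2 from papers 10 and 11.
```

A differentiable version would carry edge weights as tensors and let gradients flow through the accumulated products, which is what the roadmap items propose to wrap in DGL's graph semantics.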
We warmly welcome any help from the community. Feel free to leave any comments.
Hi @licj15, we don't know ROCm very well, so there is currently no plan to support it. We welcome any suggestions and discussions and would be glad to see an RFC on the topic.
Can mixed precision be used without having to compile DGL from source? I believe PyTorch Lightning has an fp16 = True parameter. Maybe make it available through the [Core] PyTorch Lightning support item? Thanks for all the hard work!