Support more backends in distributed operations
See original GitHub issue.

The current version of MONAI utilizes torch.distributed for all_gather, all_reduce, etc., e.g. in line 409 below. However, this raises an error when I launch the parallel programs in an ignite.distributed.Parallel context with the horovod or xla backend:

RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.

Would it be better to replace all the torch.distributed operations with ignite.distributed ones? They have native support for all of these backends.
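A minimal sketch of what such a replacement might look like is below; gather_metric and reduce_sum are hypothetical helper names, not MONAI's actual code. ignite.distributed (idist) dispatches each collective to whichever backend is active (native torch.distributed, Horovod, or XLA) and degrades to a no-op in a non-distributed run, so torch.distributed.init_process_group is never assumed:

```python
import torch
import ignite.distributed as idist

def gather_metric(data: torch.Tensor) -> torch.Tensor:
    # idist.all_gather concatenates the per-process tensors along dim 0,
    # using whichever backend idist detects (native, Horovod, or XLA);
    # it is a no-op when no distributed configuration is active.
    return idist.all_gather(data)

def reduce_sum(data: torch.Tensor) -> torch.Tensor:
    # idist.all_reduce applies the "SUM" op by default.
    return idist.all_reduce(data)
```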
Issue Analytics
- Created: 2 years ago
- Comments: 11 (10 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@sandylaker for a quick XLA check, we can use Google Colab and specify TPU as the accelerator
Hi @sandylaker,
Cool! Really appreciate your help here!
Thanks.
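Following the comment above, a quick check on a Colab TPU runtime might look like the sketch below. This is an assumption-laden example, not code from the issue: check_all_gather is a hypothetical test function, and torch_xla is assumed to be installed (as it is on a Colab TPU runtime):

```python
import torch
import ignite.distributed as idist

def check_all_gather(local_rank):
    # Hypothetical smoke test: every process contributes its rank, and
    # the gathered result should hold one entry per TPU core.
    value = torch.tensor([float(idist.get_rank())], device=idist.device())
    gathered = idist.all_gather(value)
    if idist.get_rank() == 0:
        print(f"backend={idist.backend()}, gathered={gathered}")

# idist.Parallel spawns one process per TPU core (8 on Colab) and
# initializes the XLA backend before calling the function.
with idist.Parallel(backend="xla-tpu", nproc_per_node=8) as parallel:
    parallel.run(check_all_gather)
```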