AttributeError: module 'torch.distributed' has no attribute 'is_initialized'
🐛 Bug
When running the webcam demo on macOS 10.14.1 like so:
python webcam.py --min-image-size 300 MODEL.DEVICE cpu
I get the error:
AttributeError: module 'torch.distributed' has no attribute 'is_initialized'
To Reproduce
Steps to reproduce the behavior:
- Follow the install guide here: https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/INSTALL.md
- Run the following command:
python webcam.py --min-image-size 300 MODEL.DEVICE cpu
Expected behavior
I expect this to run the webcam demo.
Environment
- PyTorch Version (e.g., 1.0): 1.0 (nightly)
- OS (e.g., Linux): macOS 10.14.1
- How you installed PyTorch (conda, pip, source): conda
- Build command you used (if compiling from source):
- Python version: 3.7
- CUDA/cuDNN version: N/A
- GPU models and configuration: N/A
- Any other relevant information:
Additional context
I fixed this issue by changing all instances of is_initialized() to is_available() in maskrcnn-benchmark/maskrcnn_benchmark/utils/comm.py. I’m not sure if this is what is intended or if I am installing something incorrectly. After making that change, I was able to run the webcam demo.
Top Results From Across the Web
- module 'torch.distributed' has no attribute 'is_initialized' in ...: Windows and macOS builds don't ship the distributed training facility, which is why this issue occurs.
- Module 'torch.distributed' has no attribute 'is_initialized': I am running inference using mmdetection (https://github.com/open-mmlab/mmdetection) and I get the above error for this piece of code; ...
- Distributed error. module 'torch.distributed' has no attribute ...: I've installed PyTorch 1.0 on Windows. When I try to use the webcam demo provided by maskrcnn-benchmark, an error occurred: Traceback (most ...
- Distributed RPC Framework — PyTorch 1.13 documentation: Not all features of the RPC package are yet compatible with CUDA support and thus their use ... torch.distributed.rpc.init_rpc(name, backend=None, rank=-1, ...)
- torchrun (Elastic Launch) — PyTorch 1.13 documentation: Transitioning from torch.distributed.launch to torchrun ... a Torch process group are provided to you by this module, no need for you to pass...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments

I’ve got the same error. You could check what torch.distributed.is_available() returns. If False, it’s obvious that your PyTorch version doesn’t support distributed training, and the code may face some errors.

@apacha good catch! Could you send a PR fixing it?
Maybe a better solution would be to use https://github.com/facebookresearch/maskrcnn-benchmark/blob/1818bb2f457c082dfd3759e6331e40f97d3059c7/maskrcnn_benchmark/utils/comm.py#L13-L18
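For reference, the linked lines in comm.py guard on both conditions before touching the process group, so the code degrades gracefully on builds without distributed support. A minimal sketch of that pattern (reconstructed from the permalink; treat the exact body as an approximation rather than a verbatim quote):

```python
import torch.distributed as dist

def get_world_size():
    # Bail out early on builds that don't ship distributed support at all;
    # on those, nothing beyond is_available() is guaranteed to exist.
    if not dist.is_available():
        return 1
    # Bail out if no process group has been initialized in this process.
    if not dist.is_initialized():
        return 1
    return dist.get_world_size()
```

With this shape, single-process CPU runs (like the webcam demo on macOS) simply see a world size of 1, and no attribute on torch.distributed beyond is_available() is ever touched unless the build actually supports it.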