Error import _LazyBatchNorm
See original GitHub issue.

from torch.nn.modules.batchnorm import _BatchNorm, _LazyBatchNorm
ImportError: cannot import name '_LazyBatchNorm' from 'torch.nn.modules.batchnorm' (/Users/xx/.pyenv/versions/3.9.6/lib/python3.9/site-packages/torch/nn/modules/batchnorm.py)
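The import fails because newer torch releases no longer define `_LazyBatchNorm` in that module. A minimal compatibility shim, written as a sketch under the assumption that the class was folded into `_LazyNormBase` in torch 1.10 (the helper name `import_lazy_batchnorm` is ours, not part of torch or onnx2pytorch), could look like this:

```python
def import_lazy_batchnorm():
    """Return torch's lazy batch-norm base class across versions.

    Assumption: torch ~1.9 exposes _LazyBatchNorm, while torch 1.10+
    replaces it with _LazyNormBase. Adjust the fallback name if your
    torch version lays the module out differently.
    """
    try:
        # torch ~1.9 layout
        from torch.nn.modules.batchnorm import _LazyBatchNorm
        return _LazyBatchNorm
    except ImportError:
        # torch 1.10+ layout: the lazy logic moved into _LazyNormBase
        from torch.nn.modules.batchnorm import _LazyNormBase
        return _LazyNormBase
```

Callers can then use the returned class for `isinstance` checks instead of importing `_LazyBatchNorm` directly, which is the kind of change the onnx2pytorch fix makes.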
Issue Analytics
- State:
- Created 2 years ago
- Comments: 5 (2 by maintainers)
Top Results From Across the Web
Potential issue converting batch normalisation from ... - GitHub
When I include batch normalisation in the GNN of the example hetero_link_pred.py, an error is thrown in the first model call of the...
LazyBatchNorm2d — PyTorch 1.13 documentation
BatchNorm2d module with lazy initialization of the num_features argument of the BatchNorm2d that is inferred from the input.size(1).
8.5. Batch Normalization - Dive into Deep Learning
In this section, we describe batch normalization, a popular and effective ... The variable magnitudes for intermediate layers cannot diverge during training ...
AttributeError: module 'tensorboard' has no attribute 'lazy'
I'm using tensorboard, but getting this error. import tensorboard.lazy as _lazy AttributeError: module 'tensorboard' has no attribute 'lazy' ...
How can I fix this expected CUDA got CPU error in PyTorch?
You are using nn.BatchNorm2d in a wrong way. BatchNorm is a layer, just like Conv2d. It has internal parameters and buffers.
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
If you are using torch==1.10.0, this is resolved by PR #23 to onnx2pytorch (https://github.com/ToriML/onnx2pytorch/pull/23).
@ZiyuBao: I think you misunderstood my comment. #23 is my PR to onnx2pytorch, not a PR to PyTorch. Did you try fetching my PR to onnx2pytorch? (In other words, the issue is caused by torch==1.10.0, but if you're using 1.10.0, it is fixed by PR #23 to onnx2pytorch.)