`norm='batch'` in UNet causes int64 weights
See original GitHub issue
Describe the bug
When the norm of UNet is set to `batch`, it causes “model.0.conv.unit0.conv.weight” to be int64 data, while instance norm is fine.
Issue Analytics
- State:
- Created 2 years ago
- Comments:7 (5 by maintainers)
Top Results From Across the Web

Normalization Techniques in Deep Neural Networks - Medium
We are going to study Batch Norm, Weight Norm, Layer Norm, Instance Norm, Group Norm, Batch-Instance Norm, Switchable Norm. Let's start with the...

Batch Norm Explained Visually — How it works, and why ...
We end up making a larger update to one weight due to its large gradient. This causes the gradient descent to bounce to...

L1-Norm Batch Normalization for Efficient Training of ... - arXiv
norm of the incoming weights to normalize the summed inputs to a neuron. ... However, the BN layer usually causes considerable training.

Group Norm, Batch Norm, Instance Norm, which is better
From the curves of the original papers, we can conclude: BN layers lead to faster convergence and higher accuracy. BN layers allow higher...

A Gentle Introduction to Batch Normalization for Deep Neural ...
This can cause the learning algorithm to forever chase a moving target. ... For example, the weights of a layer are updated given...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Batchnorm seems to create LongTensors which causes the issue. See this simple test script comparing UNet with instance norm vs. batch norm:
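A minimal sketch of such a comparison, assuming MONAI's `monai.networks.nets.UNet` with illustrative channel and stride values (not taken from the issue), could look like this:

```python
import torch
from monai.networks.nets import UNet

def report_non_float32(tag, model):
    # List every state_dict entry whose dtype is not float32.
    for name, tensor in model.state_dict().items():
        if tensor.dtype != torch.float32:
            print(f"{tag}: {name} -> {tensor.dtype}")

# Channel/stride values are illustrative; parameter names follow MONAI's
# documented UNet signature.
kwargs = dict(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64),
    strides=(2, 2),
)

report_non_float32("instance", UNet(norm="instance", **kwargs))
report_non_float32("batch", UNet(norm="batch", **kwargs))
```

With these defaults, the instance-norm model should print nothing, while the batch-norm model's state dict should contain `num_batches_tracked` buffers of dtype `torch.int64`, which is where the LongTensors come from.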
Output:
Hi @holgerroth ,
Actually, this int64 variable comes from the PyTorch source code of batch norm: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/batchnorm.py#L54 You can remove it with the settings below in your UNet args:
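One plausible form of such a setting, assuming MONAI's tuple syntax for forwarding keyword arguments to the norm layer, is to disable running statistics so the int64 buffer is never registered; this is a sketch, not necessarily the exact setting referenced above:

```python
import torch
from monai.networks.nets import UNet

# Assumption: ("batch", {...}) forwards the kwargs to nn.BatchNorm*d.
# With track_running_stats=False, BatchNorm does not register the int64
# `num_batches_tracked` buffer (see the linked PyTorch source).
net = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64),
    strides=(2, 2),
    norm=("batch", {"track_running_stats": False}),
)

assert all(t.dtype != torch.int64 for t in net.state_dict().values())
```

Note that disabling running statistics also changes behavior: the layer then normalizes with batch statistics at evaluation time instead of the tracked running mean and variance.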
I already verified it locally and will also enhance our unit tests to cover it. Could you please help double-confirm?
Thanks.