SyncBN?
Hi @HobbitLong,
I see you use SyncBN from apex to train with DataParallel; however, SyncBN seems to be designed for DistributedDataParallel. Could you please confirm whether SyncBN works in this case?
Best, Jizong
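For context (not code from the repository in question): the setup PyTorch documents for synchronized batch normalization is one process per GPU under DistributedDataParallel. A minimal sketch of that setup is below, using the native nn.SyncBatchNorm rather than apex's implementation; the network, port, and launch code are placeholders.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn

def worker(rank: int, world_size: int):
    # One process per GPU; rendezvous over localhost (placeholder settings).
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = nn.Sequential(                      # placeholder network
        nn.Conv2d(3, 64, 3, padding=1),
        nn.BatchNorm2d(64),
        nn.ReLU(),
    ).cuda(rank)

    # Replace every BatchNorm*d layer with SyncBatchNorm; batch statistics
    # are only synchronized across a DDP process group, not under DataParallel.
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
    model = nn.parallel.DistributedDataParallel(model, device_ids=[rank])

    x = torch.randn(8, 3, 32, 32, device=f"cuda:{rank}")
    out = model(x)                              # BN stats pooled across ranks
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    torch.multiprocessing.spawn(worker, args=(world_size,), nprocs=world_size)
```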
Issue Analytics
- Created 3 years ago
- Comments: 6 (1 by maintainers)
@jizongFox, I did not realize that. You may be right; the code probably only used normal BN.
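One way to sanity-check this guess is to count which normalization classes the built model actually contains. A minimal sketch, assuming a plain nn.Module and PyTorch's built-in classes (apex's apex.parallel.SyncBatchNorm would need its own isinstance check):

```python
import torch.nn as nn

def count_bn_types(model: nn.Module):
    """Count plain BatchNorm layers vs. SyncBatchNorm layers in a model."""
    plain, synced = 0, 0
    for m in model.modules():
        if isinstance(m, nn.SyncBatchNorm):
            synced += 1
        elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            plain += 1
    return plain, synced

# Example: a model whose BN layers were never converted reports synced == 0.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
print(count_bn_types(model))  # -> (1, 0)
```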
If SyncBN may not have had any effect at all, doesn't that mean the results reported in the paper are questionable?