Equalized learning rate doesn't seem to be implemented
It's mentioned briefly in Appendix B of the StyleGAN2 paper, but it is described in detail in Section 4.1 of Progressive Growing of GANs.
It's done for both conv and linear layers; I found another StyleGAN2 implementation doing this here.
This might help explain part of the FID gap between this implementation and the official one; I'd be happy to do a test run on FFHQ thumbnails to see if it helps.
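For reference, here is a minimal sketch of the scheme in PyTorch, assuming a conv layer (the class and argument names are illustrative, not taken from this repo): weights are initialized from N(0, 1) and multiplied by the per-layer He constant on every forward pass. The linear version is the same idea with `fan_in = in_dim`.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative equalized-learning-rate conv layer (Progressive GAN, section 4.1):
# init from N(0, 1), then scale by the He constant sqrt(2 / fan_in) at runtime.
class EqualConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        self.bias = nn.Parameter(torch.zeros(out_ch))
        fan_in = in_ch * kernel_size * kernel_size
        self.scale = math.sqrt(2.0 / fan_in)
        self.padding = padding

    def forward(self, x):
        # scaling happens at runtime, so the optimizer updates the unscaled weight
        return F.conv2d(x, self.weight * self.scale, self.bias, padding=self.padding)
```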
I have some test results comparing the latest version of this repo vs the official code as far as FID goes, which is what prompted this issue.
Training data was FFHQ thumbnails (128×128).
I trained the official StyleGAN2 for 1800 kimg (~56k iterations at batch size 32) and got a final FID of ~11.8.
I trained this repo for 50k iterations and got a final FID of ~43.0 (using torch-fidelity).
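For context, FID between two image folders can be computed with torch-fidelity roughly like this (the directory paths below are placeholders, not the actual directories used for the numbers above):

```python
import torch_fidelity

metrics = torch_fidelity.calculate_metrics(
    input1='generated_samples/',   # placeholder: folder of generated images
    input2='ffhq_thumbnails/',     # placeholder: folder of real 128x128 thumbnails
    fid=True,
    cuda=True,
)
print(metrics['frechet_inception_distance'])
```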
Top GitHub Comments
I have created a StyleGAN2 implementation inspired by this repository. In my experience, equalized learning rate is incredibly important for high-resolution training (256 and above).
Starting a training run on version 0.19.1 with `--lr-mlp 0.01`, since that's the value used in the paper. After this, I'd be interested in testing equalized learning rate (normal-distribution init + per-layer Kaiming constant scaling of weights at runtime).
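To make the connection between those two ideas concrete: with equalized learning rate, the 0.01 multiplier for the mapping network is typically folded into the same runtime scaling. A hedged sketch (class and argument names are illustrative, not from this repo):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative equalized-LR linear layer with a learning-rate multiplier (lr_mul).
# Dividing the init by lr_mul and multiplying the runtime scale by lr_mul lowers the
# layer's effective learning rate, which is roughly how the official implementation
# applies the 0.01 multiplier to the mapping network.
class EqualLinear(nn.Module):
    def __init__(self, in_dim, out_dim, lr_mul=1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) / lr_mul)
        self.bias = nn.Parameter(torch.zeros(out_dim))
        self.scale = math.sqrt(2.0 / in_dim) * lr_mul
        self.lr_mul = lr_mul

    def forward(self, x):
        return F.linear(x, self.weight * self.scale, self.bias * self.lr_mul)
```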