
Equalized learning rate doesn't seem to be implemented

See original GitHub issue

It’s mentioned briefly in Appendix B of the StyleGAN2 paper, but it is described in detail in Section 4.1 of Progressive Growing of GANs.

It’s done for both conv and linear layers; I found another StyleGAN2 implementation doing this here.
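
For context, here is a minimal PyTorch sketch of the trick as Section 4.1 of Progressive Growing of GANs describes it: weights are initialized from N(0, 1) and multiplied by the per-layer He constant at runtime, so every layer sees the same effective update scale under Adam. The class names are illustrative, not taken from this repo.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class EqualizedLinear(nn.Module):
    """Linear layer with runtime weight scaling (equalized learning rate)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        # Plain N(0, 1) init; the He constant is applied at runtime instead.
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.scale = math.sqrt(2.0 / in_features)  # He init constant

    def forward(self, x):
        return F.linear(x, self.weight * self.scale, self.bias)

class EqualizedConv2d(nn.Module):
    """Conv layer with the same runtime scaling; fan_in includes the kernel."""
    def __init__(self, in_channels, out_channels, kernel_size, padding=0):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size))
        self.bias = nn.Parameter(torch.zeros(out_channels))
        self.scale = math.sqrt(2.0 / (in_channels * kernel_size ** 2))
        self.padding = padding

    def forward(self, x):
        return F.conv2d(x, self.weight * self.scale, self.bias,
                        padding=self.padding)
```

Because the scaling happens in forward() rather than at init, Adam’s per-parameter update magnitude is decoupled from each layer’s fan-in, which is the point of the trick.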

This might help explain part of the FID gap between this implementation and the official one; I’d be happy to do a test run on FFHQ thumbnails to see if it helps.

I have some test results for the latest version of this repo vs the official code as far as FID goes, which is what prompted this issue. The training data was FFHQ thumbnails (128x128). I trained the official StyleGAN2 for 1800 kimg (~56k iterations at batch size 32) and got a final FID of ~11.8; I trained this repo for 50k iterations and got a final FID of ~43.0 (using torch-fidelity).
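
For reproducibility, FID numbers like the ones above can be computed with torch-fidelity roughly as follows; the directory paths are placeholders, not ones taken from this issue:

```python
import torch_fidelity

# Placeholder paths: a folder of generated samples and a folder of
# real FFHQ thumbnails, both stored as 128x128 image files.
metrics = torch_fidelity.calculate_metrics(
    input1='generated_samples/',
    input2='ffhq_thumbnails_128/',
    cuda=True,
    fid=True,
)
print(metrics['frechet_inception_distance'])
```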

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 2
  • Comments: 19 (11 by maintainers)

Top GitHub Comments

1 reaction
Erroler commented, May 23, 2021

I have created a StyleGAN2 implementation inspired by this repository. In my experience, equalized learning rate is incredibly important for high-resolution training (256 and above).

1 reaction
bob80333 commented, Aug 18, 2020

Starting a training run on version 0.19.1 with --lr-mlp 0.01, since that’s the value used in the paper. After this, I’d be interested in testing the equalized learning rate (normal-distribution init + Kaiming constant scaling of the weights at runtime).
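
A minimal sketch of the combination this comment describes (assuming PyTorch; the class name is illustrative): the learning rate multiplier folds into the same runtime-scaled layer by shrinking the init and the runtime coefficient together, which is how StyleGAN2 slows down the mapping network relative to the rest of the model.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class EqualizedLinearLrMul(nn.Module):
    """Equalized linear layer with a learning rate multiplier (sketch)."""
    def __init__(self, in_features, out_features, lr_mul=0.01):
        super().__init__()
        # Weights start at N(0, 1/lr_mul^2) and are scaled by
        # he_std * lr_mul at runtime, so the forward pass stays at He
        # scale while an optimizer step of a given size moves the
        # effective weight by an extra factor of lr_mul.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) / lr_mul)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.scale = math.sqrt(2.0 / in_features) * lr_mul
        self.lr_mul = lr_mul

    def forward(self, x):
        return F.linear(x, self.weight * self.scale, self.bias * self.lr_mul)
```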


Top Results From Across the Web

  • GAN Tricks: Equalized Learning Rate - Personal Record
    Generalised implementation in PyTorch. Equalized Learning Rate is a trick that was introduced in the works on the progressive growing of GANs ...
  • Boosting: why is the learning rate called a regularization ...
    The first way is "large learning rate" and few iterations. ... This is why a small learning rate is sort of equal to "more...
  • arXiv:1904.06145v2 [cs.LG] 20 Feb 2020
    PixelNorm (PN) or Equalized Learning Rate (EQLR) alone or in combination (top) will lead to poor FID even with KL margin (bottom left)....
  • Equalized Learning Rate - YouTube
    Explanation and implementation details of Equalized Learning Rate, with and without a learning rate multiplier. References: Progressive Growing of ...
  • How to train your MAML - OpenReview
    Abstract: The field of few-shot learning has recently seen substantial ... on meta-learning learning rate and the proposed approach does not seem to...
