
Mode collapse and batch normalization

See original GitHub issue

Thanks for your implementation. I cloned this repo and saw mode collapse while training tf-3dgan/src/3dgan_mit_biasfree.py, as shown below: [image of training results]

I tried to work around this problem by training the same model architecture on a lower-dimensional dataset (e.g. CelebA). I found that if I replace tf.contrib.layers.batch_norm with this batchnormalize function, the training results improve. For example, on CelebA (batch size = 100): [image of generated samples]

Even when I replaced it with tf.layers.batch_normalization, the result was bad, similar to tf.contrib.layers.batch_norm.

I don’t know why the TensorFlow BN layers don’t work here. Do you have any idea? Thank you so much.
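For context, the hand-rolled batch norm that many DCGAN-style repos ship normalizes each batch with its own statistics and applies a learned scale and shift, with no running-average bookkeeping at all. A minimal NumPy sketch of that idea follows; the name `batchnormalize` and its exact behavior in this repo are assumptions, since the function is only referenced above, not shown:

```python
import numpy as np

def batchnormalize(x, gamma=1.0, beta=0.0, eps=1e-8):
    """Sketch of a hand-rolled batch norm: always normalize with the
    statistics of the current batch, so there is no separate set of
    running estimates that could go stale between training and inference."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

Because this version never switches to stored statistics, it behaves identically whenever it sees a full batch, which may be why swapping it in changed the CelebA results.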

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Reactions: 1
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
BassyKuo commented, Aug 6, 2017

Sorry for the late reply. Yes, I actually used tf.layers.batch_normalization. Here is the training result I got: [image]

I also sent my branch to you. Thank you!

1 reaction
BassyKuo commented, Aug 4, 2017

I added these lines and it really works on CelebA! [image] But it doesn’t make an obvious change in 3D generation. Thanks for your help.


Top Results From Across the Web

On the Effects of Batch and Weight Normalization in ...
However GANs are known to be very hard to train, suffering from problems such as mode collapse and disturbing visual artifacts. Batch normalization...
Training Faster by Separating Modes of Variation in ... - PubMed
Batch Normalization (BN) is essential to effectively train state-of-the-art deep Convolutional ... where "mode collapse" hinders the training process.
Generative Adversarial Networks 102: DCGAN & Mode Collapse
In GANs, batch normalization was shown to help prevent mode collapse, which we will talk about shortly. The key insight, however, was to...
Mode collapse and batch normalization · Issue #12 - GitHub
I think in the latest API docs, one needs to update the batch estimates as every minibatch is passed. The docs say: update_ops...
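The update_ops point above is the usual explanation for this symptom: tf.layers.batch_normalization normalizes with batch statistics during training, but the moving mean and variance it uses at inference are only updated when the ops collected in tf.GraphKeys.UPDATE_OPS are actually run. If they never run, inference normalizes with the initial estimates (zero mean, unit variance) and the outputs degrade. A NumPy sketch of that bookkeeping, with illustrative names not taken from the repo:

```python
import numpy as np

def batchnorm_train_step(x, running_mean, running_var, momentum=0.99, eps=1e-3):
    """Normalize a batch with its own statistics, then update the running
    estimates -- the step that TF1's UPDATE_OPS perform for batch norm."""
    batch_mean = x.mean(axis=0)
    batch_var = x.var(axis=0)
    y = (x - batch_mean) / np.sqrt(batch_var + eps)
    # Exponential moving average; skipping this is the failure mode
    # described in the GitHub comment above.
    running_mean = momentum * running_mean + (1 - momentum) * batch_mean
    running_var = momentum * running_var + (1 - momentum) * batch_var
    return y, running_mean, running_var

def batchnorm_inference(x, running_mean, running_var, eps=1e-3):
    # If the updates were never applied, running_mean/var stay at their
    # initial 0/1 values and the data passes through nearly unnormalized.
    return (x - running_mean) / np.sqrt(running_var + eps)
```

In TF1 the documented fix was to make the train op depend on the collected updates, e.g. `update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)` followed by `with tf.control_dependencies(update_ops): train_op = optimizer.minimize(loss)`.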
Ways to improve GAN performance | by Jonathan Hui
Feature matching is effective when the GAN model is unstable during training. Minibatch discrimination: when mode collapse happens, all generated images look similar.
