
Possible bug in latent vector loss calculation?

See original GitHub issue

I’m confused by this and wondering whether it could be a bug. latents appears to have size (32, 128), which means that for array in latents: iterates 32 times. However, the results of these iterations aren’t stored anywhere, so at best they are a waste of time and at worst they cause a miscalculation. Perhaps the intention was to accumulate the kurtoses and skews for each array in latents, and then compute lat_loss from all the accumulated values?

for array in latents:
    mean = torch.mean(array)
    diffs = array - mean
    var = torch.mean(torch.pow(diffs, 2.0))
    std = torch.pow(var, 0.5)
    zscores = diffs / std
    skews = torch.mean(torch.pow(zscores, 3.0))
    kurtoses = torch.mean(torch.pow(zscores, 4.0)) - 3.0

lat_loss = lat_loss + torch.abs(kurtoses) / num_latents + torch.abs(skews) / num_latents

Occurs at https://github.com/lucidrains/big-sleep/blob/main/big_sleep/big_sleep.py#L211
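For reference, a minimal sketch of the accumulation the report seems to be suggesting: the per-latent penalty is summed inside the loop rather than computed once from the last iteration's leftovers. Variable names follow the snippet above; wrapping it in a function and the exact averaging scheme are assumptions, not the library's code.

```python
import torch

def latent_moment_loss(latents):
    # Penalize skew and excess kurtosis of every latent vector,
    # accumulating inside the loop so each of the 32 vectors contributes
    # (in the original snippet, only the last iteration's values survive).
    num_latents = latents.shape[0]
    lat_loss = torch.tensor(0.0)
    for array in latents:
        mean = torch.mean(array)
        diffs = array - mean
        var = torch.mean(torch.pow(diffs, 2.0))
        std = torch.pow(var, 0.5)
        zscores = diffs / std
        skews = torch.mean(torch.pow(zscores, 3.0))
        kurtoses = torch.mean(torch.pow(zscores, 4.0)) - 3.0
        # accumulate here, not after the loop
        lat_loss = lat_loss + (torch.abs(kurtoses) + torch.abs(skews)) / num_latents
    return lat_loss
```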

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 14 (9 by maintainers)

Top GitHub Comments

2 reactions
walmsley commented, Feb 15, 2021

That fully addresses the above points, thanks!

But also, perhaps most significantly, there is a deeper possible bug I’m curious about. I was trying to understand why num_latents is set to 32 here: https://github.com/lucidrains/big-sleep/blob/31fa846e0e300515360fbeb67a0315f20505fd59/big_sleep/big_sleep.py#L89-L92

Diving deeper, it seems as though these 32 different vectors are only actually used within cond_vector here: https://github.com/lucidrains/big-sleep/blob/31fa846e0e300515360fbeb67a0315f20505fd59/big_sleep/biggan.py#L510 and here: https://github.com/lucidrains/big-sleep/blob/31fa846e0e300515360fbeb67a0315f20505fd59/big_sleep/biggan.py#L520

I debugged the loop surrounding line 520 above (using the current 512px BigGAN model) and found that the model actually contains only 15 layers; of those, only 14 are GenBlock layers, which are the ones that trigger line 520.

The result is that, of the 32 latent vectors we create, only indices {0,1,2,3,4,5,6,7,8,10,11,12,13,14,15} are ever actually used. This wouldn’t be a problem, except that the remaining 17 unused latent vectors may still influence the loss calculation. I’m still trying to work out whether their influence on the loss is significant enough to merit fixing, because the fix would be slightly nontrivial: it varies with the size of the BigGAN model chosen.
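One direction for a fix, sketched below: restrict the loss to the latent vectors the generator actually consumes. The index list is copied from the debugging notes above for the 512px model only; the constant name and helper function are illustrative, not part of big-sleep, and other model sizes would need their own index set.

```python
import torch

# Illustrative: indices observed (above) to reach GenBlock layers
# in the 512px BigGAN model; other sizes would differ.
USED_LATENT_INDICES = [0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15]

def select_used_latents(latents):
    # Keep only the latent vectors the generator actually consumes,
    # so the unused vectors cannot influence the loss.
    idx = torch.tensor(USED_LATENT_INDICES)
    return latents.index_select(0, idx)
```

The loss would then be computed over the selected (15, 128) tensor instead of the full (32, 128) one.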

1 reaction
walmsley commented, Feb 16, 2021

Created new PR with proposed final fix @ https://github.com/lucidrains/big-sleep/pull/35

Overall status of the 4 possible bugs mentioned in this issue:

  • kurtosis/skew accumulation (fixed in release 0.5.1)
  • latent loss mean(dim=1) (fixed in release 0.5.2)
  • class loss topk[0] (not a bug, no fix needed)
  • num_latents 32 -> 15 (fixed in release 0.5.3)