
Did anyone get good CIFAR10 results?

See original GitHub issue

Hi, thanks for providing this code. I'm trying to reproduce the CIFAR10 results from the original DDPM paper. I use 3x32x32 images, all of the CIFAR data (50k images), and 2000 epochs (checking how the samples look every 100 epochs), and I get somewhat similar results, but not as good as the paper. This is the result that I get: [attached: grid of generated samples]. I'm also attaching the training curves (the divergent one is the validation loss): [attached: loss curves]. My training schedule is similar to the original, except that I maximize the batch size on my GPUs. I'm using an image size of 32 and U-Net options dim=64, dim_mults=(1,2,4,8).

Was anyone more successful, and can you share your results and tips? I think this result is far from perfect. Thanks very much; I hope you can help me find what I'm missing.
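For reference, here is a minimal sketch of the setup described above, assuming the lucidrains denoising-diffusion-pytorch API (Unet, GaussianDiffusion, Trainer). The data path, batch size, learning rate and step count below are placeholders, not the exact values used in the run above:

```python
from denoising_diffusion_pytorch import Unet, GaussianDiffusion, Trainer

# U-Net backbone with the options mentioned in the issue
model = Unet(
    dim = 64,
    dim_mults = (1, 2, 4, 8)
)

# Diffusion wrapper for 32x32 CIFAR10 images
diffusion = GaussianDiffusion(
    model,
    image_size = 32,
    timesteps = 1000            # DDPM uses T = 1000 diffusion steps
)

# Trainer handles data loading, EMA and periodic sampling;
# the path and hyperparameters here are placeholders
trainer = Trainer(
    diffusion,
    'path/to/cifar10/pngs',
    train_batch_size = 128,           # "maximize the batch size on my GPUs"
    train_lr = 8e-5,
    train_num_steps = 700000,
    gradient_accumulate_every = 1,
    ema_decay = 0.995,
    amp = True                        # mixed precision
)

trainer.train()
```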

Issue Analytics

  • State: open
  • Created: a year ago
  • Comments: 5

Top GitHub Comments

3 reactions
yiyixuxu commented, Oct 4, 2022

You can try p2_loss_weight_gamma = 1. My result with CIFAR10 wasn't great either with the default setting, but I think you can see a big difference with p2 weighting; the original paper used a reweighted loss too.
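A sketch of the suggested change, assuming the GaussianDiffusion constructor exposes a p2_loss_weight_gamma argument as it did in the library around the time of this comment (if I recall the defaults correctly, 0 leaves the simplified loss unweighted), the suggestion amounts to one extra keyword:

```python
# Same Unet as before; only the diffusion wrapper changes
diffusion = GaussianDiffusion(
    model,
    image_size = 32,
    timesteps = 1000,
    p2_loss_weight_gamma = 1.   # enable P2 (perception-prioritized) loss reweighting
)
```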

0 reactions
tcapelle commented, Nov 5, 2022

I do here, but with a different codebase: https://wandb.ai/capecape/train_sd/reports/How-to-Train-a-Conditional-Diffusion-Model-from-Scratch--VmlldzoyNzIzNTQ1


Quoting the question from Michael Albergo (Nov 5, 2022): "does someone have example code for mnist/cifar10?"


Top Results From Across the Web

  • CIFAR-10 Benchmark (Image Classification) - Papers With Code: top of the leaderboard is ViT-H/14 at 99.5% correct (632M params, 2020, Transformer), followed by µ2Net (ViT-L/16) at 99.49% (2022) and ViT-L/16 at 99.42% (307M params, 2020, Transformer).
  • Tutorial 2: 94% accuracy on Cifar10 in 2 minutes - Medium: In this tutorial, the mission is to reach 94% accuracy on Cifar10, which is reportedly human-level performance. In other words, getting >94% accuracy...
  • CIFAR10 CNN Model 85.97 Accuracy - Kaggle: This notebook is the result of a series of experiments conducted on the CIFAR-10 dataset to understand hyperparameter tuning of a Convolutional...
  • CIFAR-10 Can't get above 60% Accuracy, Keras with...: Note that MNIST is a much simpler problem set than CIFAR-10, and you can get 98% from a fully-connected (non-convolutional) NNet with very...
  • CIFAR10: 94% Of Accuracy By 50 Epochs With End-to-End...: This article is developed to help Computer Vision beginners get an adequate grasp of the working procedure for an Image Classification...
