Why does train_text_to_image.py perform so differently from the CompVis script?
I posted about this on the forum but didn't get any useful feedback - would love to hear from someone who knows the ins and outs of the diffusers codebase!
https://discuss.huggingface.co/t/discrepancies-between-compvis-and-diffuser-fine-tuning/25556
To summarize the post: the train_text_to_image.py script and the original CompVis repo perform very differently when fine-tuning on the same dataset with the same hyperparameters. I'm trying to reproduce the Lambda Labs Pokemon fine-tuning results and having difficulty doing so (picture results in the forum post).
I've been digging into the implementations and I'm not noticing any obvious differences in how the models are trained, how the losses are calculated, etc. - so what explains the large behavioral discrepancies?
Would really appreciate any insight on what might be causing this.
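For reference, here is a minimal sketch of the denoising training step that both scripts appear to share (epsilon prediction with an MSE loss), as I understand it. The batch field names `pixel_values` / `input_ids` are assumptions about the data pipeline, not taken from either script verbatim:

```python
# Minimal sketch of the shared fine-tuning objective (assumed, not copied from either script).
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel


def training_loss(batch, vae: AutoencoderKL, text_encoder: CLIPTextModel,
                  unet: UNet2DConditionModel, noise_scheduler: DDPMScheduler):
    # Encode images into the VAE latent space (0.18215 is SD's latent scaling factor).
    latents = vae.encode(batch["pixel_values"]).latent_dist.sample() * 0.18215

    # Sample noise and a random timestep per example, then add the noise to the latents.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps,
        (latents.shape[0],), device=latents.device,
    ).long()
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    # Condition on the text prompt and predict the added noise.
    encoder_hidden_states = text_encoder(batch["input_ids"])[0]
    model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample

    # Epsilon-prediction MSE loss.
    return F.mse_loss(model_pred.float(), noise.float(), reduction="mean")
```

If both scripts really do reduce to this same objective, the discrepancy presumably has to come from something outside the loss itself (weight initialisation, EMA handling, data preprocessing, or optimizer/scheduler settings).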
Top GitHub Comments
Thanks for posting the detailed issue @john-sungjin !
As you said, the implementation is very similar to the CompVis one. The one difference that I'm aware of is that the CompVis script (for example, the Pokemon fine-tuning script) initialises the model from the sd-v1-4-full-ema.ckpt checkpoint, so it loads the non-EMA weights for training and the EMA weights for the EMA updates. In the diffusers script, the EMA checkpoint is used for both training and the EMA updates.

I am going to add an option that enables loading both the non-EMA weights (for training) and the EMA weights (for the EMA updates) in the diffusers script and then compare again. Will report here as soon as possible 😃
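A rough sketch of what that split could look like in a diffusers-style loop. This is an illustration only, not the actual script change; the idea of a separate non-EMA revision on the Hub is an assumption here, and the EMA update is hand-rolled rather than using any particular diffusers helper:

```python
# Sketch: keep a trainable UNet and a separate EMA shadow copy, assuming the
# trainable one would ideally be initialised from non-EMA weights.
import copy
import torch
from diffusers import UNet2DConditionModel

model_id = "CompVis/stable-diffusion-v1-4"

# Trainable UNet (ideally from the non-EMA weights; loaded from the default
# revision here, since a dedicated non-EMA revision is only hypothetical).
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")

# EMA copy: a frozen shadow of the trainable UNet, updated after each step
# and used for inference/checkpointing, never trained directly.
ema_unet = copy.deepcopy(unet)
ema_unet.requires_grad_(False)

ema_decay = 0.9999


@torch.no_grad()
def ema_step(model: UNet2DConditionModel, ema_model: UNet2DConditionModel, decay: float) -> None:
    """Move the EMA weights a small step toward the current training weights."""
    for p, ema_p in zip(model.parameters(), ema_model.parameters()):
        ema_p.mul_(decay).add_(p, alpha=1.0 - decay)


# In the training loop, after each optimizer step:
#   ema_step(unet, ema_unet, ema_decay)
```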
Going to update the script soon. I am getting good results with the script now; see, for example, the emoji model.