
[Community Pipelines] lpw_stable_diffusion.py incompatible with Diffusers v0.10.0

See original GitHub issue

Describe the bug

When trying to load Stable Diffusion v2.0 with Diffusers v0.10.0, you are greeted with the error below:

ValueError: Pipeline <class 'diffusers_modules.git.lpw_stable_diffusion.StableDiffusionLongPromptWeightingPipeline'> expected {'tokenizer', 'text_encoder', 'unet', 'vae', 'safety_checker', 'feature_extractor', 'scheduler'}, but only {'vae', 'tokenizer', 'safety_checker', 'text_encoder', 'scheduler', 'unet'} were passed.
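Comparing the two sets printed in the error shows exactly which component is missing. A quick check (sets copied verbatim from the message above):

```python
# Reconstruct the two sets from the error message and diff them.
expected = {"tokenizer", "text_encoder", "unet", "vae",
            "safety_checker", "feature_extractor", "scheduler"}
passed = {"vae", "tokenizer", "safety_checker",
          "text_encoder", "scheduler", "unet"}

missing = expected - passed
print(missing)  # {'feature_extractor'}
```

So the custom pipeline declares a `feature_extractor` component that is not being passed in, which is what the workarounds below address.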

Reproduction

  1. Load custom pipeline lpw_stable_diffusion in diffusers v0.10.0

Logs

D:\ProgramData\Anaconda3\lib\site-packages\diffusers\pipeline_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    672         elif len(missing_modules) > 0:
    673             passed_modules = set(list(init_kwargs.keys()) + list(passed_class_obj.keys())) - optional_kwargs
--> 674             raise ValueError(
    675                 f"Pipeline {pipeline_class} expected {expected_modules}, but only {passed_modules} were passed."
    676             )

ValueError: Pipeline <class 'diffusers_modules.git.lpw_stable_diffusion.StableDiffusionLongPromptWeightingPipeline'> expected {'tokenizer', 'text_encoder', 'unet', 'vae', 'safety_checker', 'feature_extractor', 'scheduler'}, but only {'vae', 'tokenizer', 'safety_checker', 'text_encoder', 'scheduler', 'unet'} were passed.

System Info

  • diffusers version: 0.10.0.dev0
  • Platform: Windows-10-10.0.22621-SP0
  • Python version: 3.9.13
  • PyTorch version (GPU?): 1.12.1 (True)
  • Huggingface_hub version: 0.11.0
  • Transformers version: 4.24.0
  • Using GPU in script?: <fill in>
  • Using distributed or parallel set-up in script?: <fill in>

Issue Analytics

  • State: closed
  • Created: 10 months ago
  • Comments: 8 (2 by maintainers)

Top GitHub Comments

1 reaction
Skquark commented, Dec 4, 2022

A workaround for that is to add feature_extractor=None to your pipeline calls. Also, if you're using enable_attention_slicing, the lpw pipe has not been updated with the fix for slice_size when using SD2. I have my own custom lpw pipeline with that update on Hugging Face, which you can load with custom_pipeline="AlanB/lpw_stable_diffusion_mod" if you want to borrow it until the official correction is made. I'm still having issues with 3 of the schedulers generating ugly mosaic images when using lpw with v2.0, and I don't know the cause or fix for that yet. Just patiently waiting while the dust settles.
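The feature_extractor=None workaround above can be sketched as follows. The helper name `lpw_load_kwargs` and the model ID are illustrative (not from the issue), and the actual `from_pretrained` call is left commented out because running it downloads the model weights:

```python
def lpw_load_kwargs(**overrides):
    """Hypothetical helper: build from_pretrained kwargs with
    feature_extractor explicitly set to None, so the components
    passed in match the set the custom pipeline declares."""
    kwargs = {
        "custom_pipeline": "lpw_stable_diffusion",
        "feature_extractor": None,
    }
    kwargs.update(overrides)
    return kwargs

print(lpw_load_kwargs())

# Usage sketch (not executed here -- downloads weights):
# from diffusers import DiffusionPipeline
# pipe = DiffusionPipeline.from_pretrained(
#     "stabilityai/stable-diffusion-2", **lpw_load_kwargs()
# )
```

Passing feature_extractor=None satisfies the component check at pipeline_utils.py line 674 shown in the logs, since the missing module is then explicitly provided rather than absent.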

0 reactions
anton-l commented, Dec 12, 2022
Read more comments on GitHub >

Top Results From Across the Web

[Community Pipeline] fix lpw_stable_diffusion #1570 - GitHub
Successfully merging this pull request may close these issues. Custom pipeline : lpw_stable_diffusion is incompatible with new version of diffusers. 3 ...
Read more >
Custom Pipelines - Hugging Face
To load a custom pipeline you just need to pass the custom_pipeline argument to DiffusionPipeline , as one of the files in diffusers/examples/community...
Read more >
