`stable_diffusion_mega` pipeline doesn't work for img2img and inpainting
Describe the bug
Hello, I ran the sample code for stable-diffusion-mega and there is a mismatch between the parameter names of the new and legacy pipelines.
It seems that the mega pipeline uses `image` for img2img and inpaint, but the legacy pipelines use `init_image`.
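A quick way to check which keyword the installed legacy pipeline actually expects is to inspect its call signature (a minimal diagnostic sketch; the exact parameter list depends on the installed diffusers version):

```python
import inspect

from diffusers import StableDiffusionImg2ImgPipeline

# Under diffusers 0.9.0 this list still contains "init_image", while the
# community mega pipeline forwards the new keyword "image".
print(list(inspect.signature(StableDiffusionImg2ImgPipeline.__call__).parameters))
```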
Reproduction
```python
from diffusers import DiffusionPipeline
import PIL.Image
import requests
from io import BytesIO
import torch


def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")


pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="stable_diffusion_mega",
    torch_dtype=torch.float16,
    revision="fp16",
)
pipe.to("cuda")
pipe.enable_attention_slicing()

init_image = download_image("https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg")
prompt = "A fantasy landscape, trending on artstation"

# This call raises the TypeError below: the mega pipeline forwards `image`,
# but the legacy img2img pipeline in diffusers 0.9.0 expects `init_image`.
images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
```
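Until the community pipeline and the release are back in sync, one possible workaround (a sketch assuming the installed diffusers 0.9.0 legacy pipeline still expects `init_image`) is to build the img2img pipeline directly from the mega pipeline's components, the same way the mega pipeline does internally, and pass the image under the old keyword:

```python
from diffusers import StableDiffusionImg2ImgPipeline

# Reuse the already-loaded weights; `pipe.components` holds the shared model
# parts (unet, vae, text_encoder, ...) that the mega pipeline forwards itself.
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)
images = img2img(
    prompt=prompt,
    init_image=init_image,  # old keyword name expected by the 0.9.0 release
    strength=0.75,
    guidance_scale=7.5,
).images
```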
Logs
```
Traceback (most recent call last):
  File "D:\StableDiffusion\test.py", line 18, in <module>
    images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
  File "C:\Users\*\anaconda3\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\wnsdh\.cache\huggingface\modules\diffusers_modules\git\stable_diffusion_mega.py", line 177, in img2img
    return StableDiffusionImg2ImgPipeline(**self.components)(
  File "C:\Users\wnsdh\anaconda3\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
TypeError: __call__() missing 1 required positional argument: 'init_image'
```
System Info
- `diffusers` version: 0.9.0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.8
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Huggingface_hub version: 0.11.1
- Transformers version: 4.25.1
- Using GPU in script?: RTX 3090
- Using distributed or parallel set-up in script?: No
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Just change line 30 to:
```python
images = pipe(prompt=prompt, image=out, strength=strength, guidance_scale=guidance_scale).images
```
because the new argument is `image`, not `init_image`. Let me know if you have any other questions.
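If your script has to run on both sides of the rename, a small shim (a hypothetical helper, not part of diffusers) can dispatch on whichever keyword the installed pipeline's signature declares:

```python
import inspect

def call_img2img(pipeline, prompt, image, **kwargs):
    """Call an img2img pipeline, passing the image under whichever keyword
    ("image" or "init_image") its __call__ signature accepts."""
    params = inspect.signature(pipeline.__call__).parameters
    key = "image" if "image" in params else "init_image"
    return pipeline(prompt=prompt, **{key: image}, **kwargs)

# e.g. images = call_img2img(img2img, prompt, init_image, strength=0.75).images
```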
@pedrogengo thank you so much!!