
Does RePaintPipeline work with Stable Diffusion?

See original GitHub issue

Hello! First of all, thank you so much for adding RePaintPipeline. This pipeline works much better than Stable Diffusion inpainting when I use DDPMs (such as ddpm-ema-bedroom-256 and ddpm-bedroom-256). However, when I use CompVis/stable-diffusion-v1-4, some bugs appear. For example, I get the following error:

TypeError: set_timesteps() takes from 2 to 3 positional arguments but 5 were given

Is it possible to use Stable Diffusion as its generator? Here is my code:

import torch
from diffusers import LMSDiscreteScheduler, RePaintPipeline

scheduler = LMSDiscreteScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    num_train_timesteps=1000,
)

pipe = RePaintPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler)
pipe = pipe.to("cuda")

generator = torch.Generator(device="cuda").manual_seed(0)
output = pipe(
    original_image=img,
    mask_image=msk,
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,
    jump_n_sample=10,
    generator=generator,
)
inpainted_image = output.images[0]

Issue Analytics

  • State: closed
  • Created: 10 months ago
  • Comments: 6 (5 by maintainers)

Top GitHub Comments

1 reaction
Revist commented, Nov 9, 2022

Hi @FBehrad,

The thing is, you should supply a RePaint scheduler to the pipeline, not LMSDiscreteScheduler. In the PR https://github.com/huggingface/diffusers/pull/974 there was a discussion about making RePaint usable with all schedulers, but long story short: for now it works only with DDIM and (because DDIM generalizes DDPM) DDPM.

Take a look here: https://github.com/huggingface/diffusers/blob/main/tests/pipelines/repaint/test_repaint.py
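To make the reported TypeError concrete: RePaintPipeline passes its RePaint-specific jump parameters into scheduler.set_timesteps, which LMSDiscreteScheduler does not accept. The stub classes below are simplified stand-ins written for illustration (not the real diffusers code); only the set_timesteps call shapes are meant to mirror the library's signatures at the time of this issue.

```python
class LMSDiscreteSchedulerStub:
    # Stand-in: LMSDiscreteScheduler.set_timesteps accepts only the step
    # count and an optional device, i.e. 2 to 3 positional args
    # (counting self).
    def set_timesteps(self, num_inference_steps, device=None):
        self.num_inference_steps = num_inference_steps


class RePaintSchedulerStub:
    # Stand-in: RePaintScheduler.set_timesteps additionally takes the
    # RePaint jump parameters, which is exactly what the pipeline passes.
    def set_timesteps(self, num_inference_steps, jump_length=10,
                      jump_n_sample=10, device=None):
        self.num_inference_steps = num_inference_steps


def call_like_repaint_pipeline(scheduler):
    # RePaintPipeline invokes set_timesteps with four arguments,
    # so Python counts 5 positional args including self.
    scheduler.set_timesteps(250, 10, 10, "cpu")


try:
    call_like_repaint_pipeline(LMSDiscreteSchedulerStub())
except TypeError as exc:
    # Mirrors the reported error: set_timesteps() takes from 2 to 3
    # positional arguments but 5 were given.
    print(exc)

call_like_repaint_pipeline(RePaintSchedulerStub())  # accepts the call
```

This is why swapping in RePaintScheduler (rather than LMSDiscreteScheduler) resolves the immediate TypeError, independent of the deeper latent-vs-image diffusion incompatibility discussed below.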

0 reactions
Randolph-zeng commented, Dec 8, 2022

In case anyone here is interested in the progress of this feature request: there is a community contribution PR in progress right now, but I believe we still need some help to figure out why it is not working as expected. Any help would be appreciated :)
https://github.com/huggingface/diffusers/issues/1333#issuecomment-1342193203
https://github.com/huggingface/diffusers/issues/1602


Top Results From Across the Web

Stable diffusion pipelines - Hugging Face
Pipeline for text-guided image inpainting using Stable Diffusion. This is an experimental feature. This model inherits from DiffusionPipeline.

[Community Pipeline] RePaint + Stable Diffusion #1333 - GitHub
At the moment the RePaint pipeline only works for image diffusion, as opposed to latent diffusion (as in Stable Diffusion). But making the...

What is Stable Diffusion and why should you care? - LinkedIn
Stability AI is a text-to-image conversion model which enables billions of users to produce amazing works quickly. This model uses an inflexible ...

How does Stable Diffusion work? - YouTube
StableDiffusion explained. How does an AI generate images from text? How do Latent Diffusion Models work? If you want answers to these ...

How to run Stable Diffusion at Home - NO CODE - YouTube
