
`DDIMScheduler` does not work unless `set_timesteps` is used


Describe the bug

You currently need to call `set_timesteps` because `num_inference_steps` defaults to `None`; I expected it to have a sensible default value.

Adding a very simple PR.

Reproduction

    from diffusers import DDIMScheduler, StableDiffusionPipeline

    # model_name and device are assumed to be defined elsewhere
    # (a Stable Diffusion checkpoint id and a torch device)
    scheduler = DDIMScheduler(
        beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
    )
    # skipping this step will cause an exception later
    # scheduler.set_timesteps(1000)
    pipeline = StableDiffusionPipeline.from_pretrained(
        model_name,
        scheduler=scheduler,
        use_auth_token=True,
    ).to(device)

Logs

>       prev_timestep = timestep - self.config.num_train_timesteps // self.num_inference_steps
E       TypeError: unsupported operand type(s) for //: 'int' and 'NoneType'
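For what it's worth, the same TypeError can be reproduced without the pipeline by calling `scheduler.step` directly before `set_timesteps`. A rough, untested sketch (written against a recent diffusers release, so argument names and the step output type may differ slightly in v0.2.4; the random tensors are only stand-ins for real latents and a UNet prediction):

    import torch
    from diffusers import DDIMScheduler

    scheduler = DDIMScheduler(
        beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
    )

    # Dummy latent and dummy model output, only there to exercise step().
    sample = torch.randn(1, 4, 64, 64)
    model_output = torch.randn(1, 4, 64, 64)

    # num_inference_steps is still None here, so this raises the TypeError above:
    # scheduler.step(model_output, 999, sample)

    scheduler.set_timesteps(50)  # after this call, step() works
    out = scheduler.step(model_output, int(scheduler.timesteps[0]), sample)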

System Info

diffusers v0.2.4
python 3.8.8

Issue Analytics

  • State: closed
  • Created: a year ago
  • Reactions: 1
  • Comments: 6 (6 by maintainers)

Top GitHub Comments

1 reaction
patrickvonplaten commented, Sep 13, 2022

Closing since the questions have been answered. Please ping here @samedii if you still have open questions.

1 reaction
patrickvonplaten commented, Aug 29, 2022

@samedii, actually a quick question: how exactly do you get the above error? The stable diffusion pipeline should always correctly set the number of timesteps, see: https://github.com/huggingface/diffusers/blob/efa773afd2a99f6041043298d9f3e8bcdaa325c7/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L121

How exactly do you get the error:

>       prev_timestep = timestep - self.config.num_train_timesteps // self.num_inference_steps
E       TypeError: unsupported operand type(s) for //: 'int' and 'NoneType'

If you do:

    scheduler = DDIMScheduler(
        beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
    )
    # skipping this step will cause an exception later
    # scheduler.set_timesteps(1000)
    pipeline = StableDiffusionPipeline.from_pretrained(
        model_name,
        scheduler=scheduler,
        use_auth_token=True,
    ).to(device)
   pipeline("an image of a planet")

it should work correctly
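For anyone driving the scheduler by hand instead of through the pipeline, the pattern the pipeline follows internally is roughly the following. This is a simplified, untested sketch against a recent diffusers release (in v0.2.4 `step()` may return a plain dict rather than an output object with `prev_sample`); the random tensor stands in for the UNet prediction:

    import torch
    from diffusers import DDIMScheduler

    scheduler = DDIMScheduler(
        beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
    )
    # This is the call the pipeline makes for you at the start of __call__:
    scheduler.set_timesteps(50)

    latents = torch.randn(1, 4, 64, 64)
    for t in scheduler.timesteps:
        noise_pred = torch.randn(1, 4, 64, 64)  # stand-in for the UNet forward pass
        latents = scheduler.step(noise_pred, t, latents).prev_sample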
