
Schedulers not compatible with OnnxStableDiffusionPipeline: TypeError: unsupported operand

See original GitHub issue

Describe the bug

Hi, I tried to use different schedulers with OnnxStableDiffusionPipeline, but it throws errors: the schedulers are not compatible with the numpy arrays used in the ONNX pipeline.

The ONNX checkpoints were converted with convert_stable_diffusion_checkpoint_to_onnx.py.

I have found a solution, but it is probably not optimal, because it uses torch inside the pipeline call.

The solution works with:

  • CompVis/stable-diffusion-v1-4
  • hakurei/waifu-diffusion

First error:

File "C:\...\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 152, in __call__
  latents = latents * self.scheduler.init_noise_sigma
TypeError: unsupported operand type(s) for *: 'numpy.ndarray' and 'Tensor'

If I cast latents to a torch tensor before the init_noise_sigma multiplication, the unet call fails next:

latents = torch.tensor(latents)
latents = latents * self.scheduler.init_noise_sigma
File "C:\...\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 171, in __call__
  noise_pred = self.unet(
File "C:\...\diffusers\onnx_utils.py", line 46, in __call__
  return self.model.run(None, inputs)
File "C:\...\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
  return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(double)) , expected: (tensor(int64))

If I add dtype=np.int64 to the timestep in the unet arguments, scheduler.step fails next:

# predict the noise residual
noise_pred = self.unet(
  sample=latent_model_input,
  timestep=np.array([t], dtype=np.int64),
  encoder_hidden_states=text_embeddings,
)
File "C:\...\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 184, in __call__
  latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
File "C:\...\diffusers\schedulers\scheduling_lms_discrete.py", line 224, in step
  pred_original_sample = sample - sigma * model_output
TypeError: unsupported operand type(s) for -: 'numpy.ndarray' and 'Tensor'

And if I cast latents to a torch tensor before passing them into scheduler.step (and convert back to numpy afterwards), it works again:

# compute the previous noisy sample x_t -> x_t-1
latents = torch.tensor(latents)
latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
latents = np.array(latents)
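
For reference, here is how the three casts fit together inside the pipeline's __call__ (a sketch stitched from the snippets above, not the exact upstream code, with the unchanged parts of the loop elided; latent_model_input, text_embeddings and extra_step_kwargs are the pipeline's existing local variables):

import numpy as np
import torch  # only needed for the scheduler casts below

# scale the initial numpy latents on a torch tensor, then go back to numpy
latents = torch.tensor(latents)
latents = latents * self.scheduler.init_noise_sigma
latents = np.array(latents)

for t in self.scheduler.timesteps:
    # ... latent_model_input is prepared from latents as before ...

    # predict the noise residual; the ONNX unet expects an int64 timestep
    noise_pred = self.unet(
        sample=latent_model_input,
        timestep=np.array([t], dtype=np.int64),
        encoder_hidden_states=text_embeddings,
    )
    # ... noise_pred post-processing (output indexing, classifier-free guidance) unchanged ...

    # compute the previous noisy sample x_t -> x_t-1 on a torch tensor,
    # then convert back to numpy for the next ONNX call
    latents = torch.tensor(latents)
    latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
    latents = np.array(latents)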

Reproduction

from diffusers import OnnxStableDiffusionPipeline, LMSDiscreteScheduler

lms = LMSDiscreteScheduler()

# model_path points to a checkpoint converted with convert_stable_diffusion_checkpoint_to_onnx.py
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    model_path,
    provider="DmlExecutionProvider",
    scheduler=lms,
    local_files_only=True,
)

image = pipe("prompt")[0]
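
With the pipeline edits above in place, the reproduction runs through. A minimal usage sketch (the prompt, step count and filename are just placeholders, assuming the default PIL output):

result = pipe("a photo of an astronaut riding a horse", num_inference_steps=25)
result.images[0].save("lms_onnx_test.png")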

Logs

No response

System Info

  • diffusers version: 0.6.0

  • Platform: Windows-10-10.0.19044-SP0

  • Python version: 3.10.7

  • PyTorch version (GPU?): 1.12.1+cpu (False)

  • Huggingface_hub version: 0.10.1

  • Transformers version: 4.23.1

  • Using GPU in script?: DmlExecutionProvider

  • Using distributed or parallel set-up in script?: <fill in>

  • GPU: AMD RX 6900 XT 16GB

Issue Analytics

  • State: closed
  • Created: a year ago
  • Reactions: 1
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

2 reactions
anton-l commented, Nov 7, 2022

@averad https://github.com/huggingface/diffusers/pull/1173 should fix the scheduler issues once merged

1 reaction
averad commented, Nov 3, 2022

@anton-l you and your team members are the best, thank you.

