
Stable Diffusion ONNX requires cpu `onnxruntime` even if CUDA version is installed

See original GitHub issue

Describe the bug

The ONNX support doesn’t work with `CUDAExecutionProvider`. I installed `onnxruntime-gpu`. Running `import onnxruntime as ort`, `ort.get_device()` returns `GPU` and `ort.get_available_providers()` returns `['CPUExecutionProvider', 'TensorrtExecutionProvider', 'CUDAExecutionProvider']`, but diffusers complains that onnxruntime is not installed and asks me to install the CPU version (`pip install onnxruntime`).
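
For context, the failure is most likely a mismatch between the importable module name and the installed distribution name: the onnxruntime-gpu wheel still provides a module called `onnxruntime`, so the import itself works, but a dependency check that looks up package metadata under the single name `onnxruntime` will not find it. A minimal sketch of that distinction, assuming only onnxruntime-gpu is installed (uses `importlib.metadata`, so Python 3.8+, or the `importlib_metadata` backport on 3.7):

import importlib.metadata
import importlib.util

# The module is importable no matter which wheel provided it.
print(importlib.util.find_spec("onnxruntime") is not None)

# But only one of these distribution names has metadata installed.
for dist in ("onnxruntime", "onnxruntime-gpu"):
    try:
        print(dist, importlib.metadata.version(dist))
    except importlib.metadata.PackageNotFoundError:
        print(dist, "not installed")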

Reproduction

Install onnxruntime-gpu (`pip install onnxruntime-gpu`)

and run

from diffusers import StableDiffusionOnnxPipeline

# Load the ONNX pipeline and request the CUDA execution provider explicitly.
pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="CUDAExecutionProvider",
    use_auth_token=True,
)

Logs

No response

System Info

  • diffusers version: 0.3.0
  • Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
  • Python version: 3.7.13
  • PyTorch version (GPU?): 1.12.1+cu113 (True)
  • Huggingface_hub version: 0.9.1
  • Transformers version: 4.21.3
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: No

Issue Analytics

  • State: closed
  • Created a year ago
  • Comments: 8 (6 by maintainers)

Top GitHub Comments

1 reaction
SkyTNT commented, Sep 11, 2022

This is my PR trying to fix this issue: #440
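
One way such an availability check can accept either wheel is to probe a short list of candidate distribution names; a rough sketch of that idea (the helper name and candidate list here are illustrative, not necessarily what the PR does):

import importlib.metadata

# Candidate distribution names that all provide the `onnxruntime` module.
_ONNXRUNTIME_CANDIDATES = ("onnxruntime", "onnxruntime-gpu")

def is_onnxruntime_available() -> bool:
    # Report availability if any known onnxruntime distribution is installed.
    for name in _ONNXRUNTIME_CANDIDATES:
        try:
            importlib.metadata.version(name)
            return True
        except importlib.metadata.PackageNotFoundError:
            continue
    return False

print(is_onnxruntime_available())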

0 reactions
patrickvonplaten commented, Oct 14, 2022

Thanks!

Read more comments on GitHub >

Top Results From Across the Web

PyTorch on Twitter: "In this tutorial, Microsoft walks through the ...
BTW - this tutorial runs your Pytorch models in Onnx runtime which supports both CPU and GPU. So, Torch is not installed in...
Read more >
DeepSpeed Integration - Hugging Face
DeepSpeed ZeRO-3 can be used for inference as well, since it allows huge models to be loaded on multiple GPUs, which won't be...
Read more >
How to Generate Images with Stable Diffusion in Seconds, for ...
Start a Vertex AI Notebook. The Stable Diffusion model is written in Pytorch and works best if you have more than 10 GB...
Read more >
I've created a How To Video on running Stable Diffusion on a ...
I am stuck here as well, but have run the pip install transformers command. Anyone have advice on how to get around this?...
Read more >
A Web UI for Stable Diffusion - Hacker News
I've been running the Intel CPU version [0] for a while now on a 2013 MacMini. Works fine; it takes several minutes per...
Read more >
