Stable Diffusion ONNX requires cpu `onnxruntime` even if CUDA version is installed
Describe the bug
The ONNX support doesn’t work with CUDAExecutionProvider
I installed onnxruntime-gpu
Running
import onnxruntime as ort
ort.get_device()
returns
'GPU'
and
ort.get_available_providers()
returns
['CPUExecutionProvider', 'TensorrtExecutionProvider', 'CUDAExecutionProvider']
but diffusers complains that onnxruntime is not installed and asks me to install the CPU version (pip install onnxruntime).
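For context, the false "not installed" report likely comes from how the library detects onnxruntime: a package-metadata lookup keyed on the distribution name "onnxruntime" does not match the "onnxruntime-gpu" wheel, even though both wheels install the same importable onnxruntime module. A minimal sketch of a check that handles both names (the variable names and candidate list here are illustrative assumptions, not diffusers' exact code):

import importlib.util

try:
    import importlib.metadata as importlib_metadata  # stdlib on Python 3.8+
except ImportError:
    import importlib_metadata  # backport, needed on Python 3.7

# The module itself imports fine no matter which wheel provided it.
_onnx_available = importlib.util.find_spec("onnxruntime") is not None

if _onnx_available:
    # A lookup keyed only on "onnxruntime" raises PackageNotFoundError when
    # onnxruntime-gpu is installed, so try several candidate distribution names.
    _onnxruntime_version = None
    for candidate in ("onnxruntime", "onnxruntime-gpu"):
        try:
            _onnxruntime_version = importlib_metadata.version(candidate)
            break
        except importlib_metadata.PackageNotFoundError:
            pass
    _onnx_available = _onnxruntime_version is not None

print("ONNX runtime detected:", _onnx_available)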
Reproduction
Install
pip install onnxruntime-gpu
and run
from diffusers import StableDiffusionOnnxPipeline
pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="CUDAExecutionProvider",
    use_auth_token=True,
)
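Independently of diffusers, the CUDA provider can be sanity-checked with a throwaway InferenceSession on a one-node graph. This is only a sketch: the graph and opset choice are arbitrary, and it needs just the onnx and onnxruntime-gpu packages.

import onnx
import onnxruntime as ort
from onnx import TensorProto, helper

# Build a one-node Identity graph so no model download is needed.
x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1])
y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1])
node = helper.make_node("Identity", ["x"], ["y"])
graph = helper.make_graph([node], "probe", [x], [y])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])

# If CUDA cannot actually be used, onnxruntime may fall back to CPU,
# so inspect get_providers() rather than assuming.
sess = ort.InferenceSession(
    model.SerializeToString(),
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())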
Logs
No response
System Info
- diffusers version: 0.3.0
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Huggingface_hub version: 0.9.1
- Transformers version: 4.21.3
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
Issue Analytics
- State:
- Created a year ago
- Comments: 8 (6 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
This is my PR trying to fix this issue: #440
Thanks!
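Once the detection issue is resolved (the PR above), it is worth confirming that the pipeline's sessions actually run on CUDA. Assuming the ONNX pipeline wraps each component in an onnxruntime InferenceSession exposed via a .model attribute (an internal detail that may differ across diffusers versions), something like:

# expect CUDAExecutionProvider to be listed first when the GPU provider is active
print(pipe.unet.model.get_providers())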