TypeError: OnnxStableDiffusionPipeline.__init__() missing 1 required positional argument: 'vae_encoder'
Describe the bug
Hi,
I tried ONNX Runtime for inference. The code is:
```python
from diffusers import StableDiffusionOnnxPipeline

pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="CPUExecutionProvider",
    use_auth_token=True,
)
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```
The error is:

```
Traceback (most recent call last):
  File "Python-work\stable_diffuser_onnx_compvix.py", line 3, in <module>
    pipe = StableDiffusionOnnxPipeline.from_pretrained(
  File "onnx-virtual\lib\site-packages\diffusers\pipeline_utils.py", line 647, in from_pretrained
    model = pipeline_class(**init_kwargs)
  File "onnx-virtual\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 272, in __init__
    super().__init__(
TypeError: OnnxStableDiffusionPipeline.__init__() missing 1 required positional argument: 'vae_encoder'
```
Kindly help me fix this error.
Reproduction
I tried this in a virtual environment with Python 3.10.
Logs
No response
System Info
Windows, Python 3.10
Top GitHub Comments
@kamalasubha this issue should be fixed in `diffusers>=0.8`. Try installing the latest version, then this will work:
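A minimal sketch of the upgraded call, assuming the same model, revision, and provider as the original report (in recent diffusers releases the class is exported as `OnnxStableDiffusionPipeline`, with `StableDiffusionOnnxPipeline` kept as a deprecated alias):

```python
# Hedged sketch: same call as the original report, after `pip install -U diffusers`.
from diffusers import OnnxStableDiffusionPipeline

pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="CPUExecutionProvider",
    use_auth_token=True,
)
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```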
@kamalasubha Don't use `revision="onnx"` - you are already calling from an ONNX pipeline! You might also want to install ort-nightly-directml and use `DmlExecutionProvider` instead of `CPUExecutionProvider`. It works on my RX 560 4G, although it uses RAM as shared memory (which slows things down quite a bit) to compensate for the lack of VRAM - still 5 times faster than my CPU. If your GPU has 4 GB or more, you definitely should use `DmlExecutionProvider`. Take a look at the issue I opened, as it contains the link for it as well as a fix for a possible problem you might encounter (so far I seem to be the only one, though) - don't forget you must grab the version specific to your Python version.
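For instance, loading a locally converted pipeline with DirectML might look like this (a sketch; the local path is illustrative, taken from the model name mentioned below):

```python
# Hedged sketch: requires the ort-nightly-directml package instead of onnxruntime.
from diffusers import OnnxStableDiffusionPipeline

pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "./eldenring_v2_pruned_onnx",     # illustrative local path to a converted model
    provider="DmlExecutionProvider",  # DirectML execution provider (Windows GPU)
)
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```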
EDIT: Forgot to ask - what scheduler are you using? I find that the default one doesn't work well for custom models; DDIM works great for them. For reference, here is how I'm doing it:
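A minimal sketch, assuming the widely used Stable Diffusion v1 DDIM settings (the `DDIM` name matches the assignment below):

```python
# Hedged sketch: a DDIMScheduler built with the common Stable Diffusion v1
# beta schedule; these parameter values are assumptions, not from the thread.
from diffusers import DDIMScheduler

DDIM = DDIMScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
)
```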
Then you assign them like so:
```python
pipe.scheduler = DDIM
```
EDIT2: Noticed this right before I was about to leave: you are trying to load the model `model_onnx`, even though when you converted it, it was `eldenring_v2_pruned_onnx` - is that a mistake?