
onnx inpainting error


Describe the bug

With the latest code, I was able to convert the SD 1.4 checkpoint to ONNX and successfully run txt2img and img2img with the new ONNX pipelines. However, ONNX inpainting isn't working.

Thank you!

Reproduction

from diffusers import OnnxStableDiffusionInpaintPipeline
import io
import requests
from PIL import Image


def download_image(url):
    # Fetch an image over HTTP and decode it as an RGB PIL image
    response = requests.get(url)
    return Image.open(io.BytesIO(response.content)).convert("RGB")


img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

prompt = "a cat sitting on a bench"
denoise_strength = 0.8
steps = 25
scale = 7.5

pipe = OnnxStableDiffusionInpaintPipeline.from_pretrained("./onnx", provider="DmlExecutionProvider")
image = pipe(prompt, image=init_image, mask_image=mask_image,
             strength=denoise_strength, num_inference_steps=steps,
             guidance_scale=scale).images[0]
image.save("inp.png")

Logs

2022-10-19 21:49:48.9222990 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
[warning repeated for each of the remaining model sessions]
  0%|          | 0/26 [00:00<?, ?it/s]
2022-10-19 21:50:01.5112469 [E:onnxruntime:, sequential_executor.cc:369 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running Conv node. Name:'Conv_168' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1866)\onnxruntime_pybind11_state.pyd!00007FFBF0CDA4CA: (caller: 00007FFBF0CDBACF) Exception(3) tid(4a5c) 80070057 The parameter is incorrect.
Traceback (most recent call last):
  File "E:\PythonInOffice\amd_sd_img2img\inp.py", line 22, in <module>
    image = pipe(prompt, image=init_image, mask_image=mask_image,
  File "E:\PythonInOffice\amd_sd_img2img\diffuers_venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "E:\PythonInOffice\amd_sd_img2img\diffusers\src\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion_inpaint.py", line 352, in __call__
    noise_pred = self.unet(
  File "E:\PythonInOffice\amd_sd_img2img\diffusers\src\diffusers\onnx_utils.py", line 46, in __call__
    return self.model.run(None, inputs)
  File "E:\PythonInOffice\amd_sd_img2img\diffuers_venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Conv node. Name:'Conv_168' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1866)\onnxruntime_pybind11_state.pyd!00007FFBF0CDA4CA: (caller: 00007FFBF0CDBACF) Exception(3) tid(4a5c) 80070057 The parameter is incorrect.
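The `Conv_168` failure is consistent with a channel mismatch: the inpainting pipeline feeds the UNet a 9-channel input (latents, mask, and masked-image latents concatenated), while a UNet exported from plain SD 1.4 was traced with a 4-channel input, so its first convolution rejects the tensor. A minimal sketch of the shape arithmetic (shapes are illustrative, not taken from the actual export):

```python
import numpy as np

# Illustrative shapes for a 512x512 generation (the latent space is 64x64).
latents = np.zeros((1, 4, 64, 64), dtype=np.float32)               # noisy latents
mask = np.zeros((1, 1, 64, 64), dtype=np.float32)                  # downscaled inpaint mask
masked_image_latents = np.zeros((1, 4, 64, 64), dtype=np.float32)  # VAE-encoded masked image

# The inpainting pipeline concatenates these along the channel axis,
# so the UNet must have been exported expecting 9 input channels.
unet_input = np.concatenate([latents, mask, masked_image_latents], axis=1)
print(unet_input.shape)  # (1, 9, 64, 64)
```

A 4-channel SD 1.4 UNet handed this tensor fails exactly at its first conv node, which matches the traceback above.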

System Info

diffusers version: 2a0c823527694058d410ed6f91b52e7dd9f94ebe

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 6 (4 by maintainers)

Top GitHub Comments

2 reactions
anton-l commented, Oct 27, 2022

@pythoninoffice in the current release, only runwayml/stable-diffusion-inpainting is compatible with ONNX inpainting, since the pipeline needs a model finetuned for inpainting. Models that are not finetuned for inpainting, such as runwayml/stable-diffusion-v1-5 and the CompVis/stable-diffusion-v1-N checkpoints, work with the PyTorch pipeline StableDiffusionInpaintPipelineLegacy, which doesn't have an ONNX counterpart.
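The compatibility described in that answer can be summarized in a small sketch (the pipeline and checkpoint names come from the comment; the lookup helper itself is just illustration):

```python
# Which inpainting pipeline accepts which checkpoints, per the answer above.
COMPATIBLE = {
    # ONNX inpainting needs the checkpoint finetuned for inpainting.
    "OnnxStableDiffusionInpaintPipeline": {
        "runwayml/stable-diffusion-inpainting",
    },
    # PyTorch-only legacy pipeline; accepts non-finetuned v1 checkpoints.
    "StableDiffusionInpaintPipelineLegacy": {
        "runwayml/stable-diffusion-v1-5",
        "CompVis/stable-diffusion-v1-4",  # and the other v1-N checkpoints
    },
}

def is_compatible(pipeline: str, checkpoint: str) -> bool:
    return checkpoint in COMPATIBLE.get(pipeline, set())

print(is_compatible("OnnxStableDiffusionInpaintPipeline", "CompVis/stable-diffusion-v1-4"))  # False
print(is_compatible("StableDiffusionInpaintPipelineLegacy", "runwayml/stable-diffusion-v1-5"))  # True
```

So the fix for the repro above is to convert runwayml/stable-diffusion-inpainting to ONNX and point `OnnxStableDiffusionInpaintPipeline.from_pretrained` at that export instead of the SD 1.4 one.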

0 reactions
pythoninoffice commented, Nov 7, 2022

@anton-l Thanks for the answer, and apologies for my slow response! A quick follow-up question: is there any way to convert an existing model into a finetuned one, so that we can use it with OnnxStableDiffusionInpaintPipeline? Thanks!


