
Diffusers 0.7.0 - Torch Accelerator - "import OnnxStableDiffusionPipeline" results in Traceback Error (DmlExecutionProvider)

See original GitHub issue

Intro

Diffusers provides a Stable Diffusion pipeline compatible with ONNX Runtime. This allows you to run Stable Diffusion on any hardware that supports ONNX (including CPUs), and in environments where an accelerated version of PyTorch is not available.
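
For reference, a minimal sketch of loading the pipeline on the CPU execution provider (the local model path mirrors the reproduction below; adjust as needed):

from diffusers import OnnxStableDiffusionPipeline

# Load a local ONNX export of Stable Diffusion and run it on the CPU provider
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-5", revision="onnx", provider="CPUExecutionProvider"
)
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut_cpu.png")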

Describe the bug

Calling “from diffusers import OnnxStableDiffusionPipeline” results in a traceback error when using diffusers 0.7.0. Diffusers 0.7.0 now requires the accelerate library to be installed, and accelerate breaks onnxruntime-directml on Windows.

Removing accelerate and installing diffusers==0.6.0 fixes the issue.

Please remove the requirement to install accelerate, and only use it if it is appropriate for the hardware being used.
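
For reference, the workaround described above amounts to the following (in the same virtual environment):

pip uninstall accelerate
pip install diffusers==0.6.0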

Reproduction

pip install virtualenv
python -m venv sd_env
sd_env\scripts\activate
pip install diffusers
pip install transformers
pip install onnxruntime
pip install onnx
pip install torch
pip install onnxruntime-directml --force-reinstall
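
As a sanity check (not part of the original report), you can confirm the DirectML provider is visible to onnxruntime after the force-reinstall:

import onnxruntime
# "DmlExecutionProvider" should appear in this list if onnxruntime-directml installed correctly
print(onnxruntime.get_available_providers())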

Run the sample code:

from diffusers import OnnxStableDiffusionPipeline

height = 512
width = 512
num_inference_steps = 50
guidance_scale = 7.5
eta = 0.0
prompt = "a photo of an astronaut riding a horse on mars"
negative_prompt = "bad hands, blurry"

# Load the local ONNX export and run it through the DirectML execution provider
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-5", revision="onnx", provider="DmlExecutionProvider", device_map="auto"
)
# Keyword arguments avoid depending on the positional order of the pipeline call
image = pipe(
    prompt, height=height, width=width, num_inference_steps=num_inference_steps,
    guidance_scale=guidance_scale, negative_prompt=negative_prompt, eta=eta,
).images[0]
image.save("astronaut_rides_horse.png")

Logs

Traceback (most recent call last):
  File "D:\ai\1.py", line 1, in <module>
    from diffusers import OnnxStableDiffusionPipeline
  File "D:\ai\sd_env\lib\site-packages\diffusers\__init__.py", line 24, in <module>
    raise ImportError(error_msg)
ImportError: Please install the `accelerate` library to use Diffusers with PyTorch. You can do so by running `pip install diffusers[torch]`. Or if torch is already installed, you can run `pip install accelerate`.

After installing accelerate:

NOTE: Redirects are currently not supported in Windows or MacOs.
Traceback (most recent call last):
  File "D:\ai\1.py", line 1, in <module>
    from diffusers import OnnxStableDiffusionPipeline
  File "D:\ai\sd_env\lib\site-packages\diffusers\__init__.py", line 28, in <module>
    from .modeling_utils import ModelMixin
  File "D:\ai\sd_env\lib\site-packages\diffusers\modeling_utils.py", line 24, in <module>
    import accelerate
  File "D:\ai\sd_env\lib\site-packages\accelerate\__init__.py", line 7, in <module>
    from .accelerator import Accelerator
  File "D:\ai\sd_env\lib\site-packages\accelerate\accelerator.py", line 27, in <module>
    from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
  File "D:\ai\sd_env\lib\site-packages\accelerate\checkpointing.py", line 24, in <module>
    from .utils import (
  File "D:\ai\sd_env\lib\site-packages\accelerate\utils\__init__.py", line 96, in <module>
    from .launch import PrepareForLaunch, _filter_args, get_launch_prefix
  File "D:\ai\sd_env\lib\site-packages\accelerate\utils\launch.py", line 25, in <module>
    import torch.distributed.run as distrib_run
  File "D:\ai\sd_env\lib\site-packages\torch\distributed\run.py", line 386, in <module>
    from torch.distributed.launcher.api import LaunchConfig, elastic_launch
  File "D:\ai\sd_env\lib\site-packages\torch\distributed\launcher\__init__.py", line 10, in <module>
    from torch.distributed.launcher.api import (  # noqa: F401
  File "D:\ai\sd_env\lib\site-packages\torch\distributed\launcher\api.py", line 15, in <module>
    from torch.distributed.elastic.agent.server.api import WorkerSpec
  File "D:\ai\sd_env\lib\site-packages\torch\distributed\elastic\agent\server\__init__.py", line 40, in <module>
    from .local_elastic_agent import TORCHELASTIC_ENABLE_FILE_TIMER, TORCHELASTIC_TIMER_FILE
  File "D:\ai\sd_env\lib\site-packages\torch\distributed\elastic\agent\server\local_elastic_agent.py", line 19, in <module>
    import torch.distributed.elastic.timer as timer
  File "D:\ai\sd_env\lib\site-packages\torch\distributed\elastic\timer\__init__.py", line 44, in <module>
    from .file_based_local_timer import FileTimerClient, FileTimerServer, FileTimerRequest  # noqa: F401
  File "D:\ai\sd_env\lib\site-packages\torch\distributed\elastic\timer\file_based_local_timer.py", line 63, in <module>
    class FileTimerClient(TimerClient):
  File "D:\ai\sd_env\lib\site-packages\torch\distributed\elastic\timer\file_based_local_timer.py", line 81, in FileTimerClient
    def __init__(self, file_path: str, signal=signal.SIGKILL) -> None:
AttributeError: module 'signal' has no attribute 'SIGKILL'. Did you mean: 'SIGILL'?
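
The final AttributeError occurs because the signal module on Windows does not define SIGKILL (it is POSIX-only). A minimal illustration of the kind of guard that avoids it (not from the issue, shown for context):

import signal
# SIGKILL only exists on POSIX; fall back to SIGTERM, which Windows does define
default_sig = getattr(signal, "SIGKILL", signal.SIGTERM)
print(default_sig)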

System Info

  • Windows 11
  • Python 3.10.x
  • Diffusers 0.7.0
  • Transformers 4.24.0
  • Torch 1.13.0
  • Onnxruntime 1.13.1
  • Onnxruntime-directml 1.13.1

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 10 (6 by maintainers)

Top GitHub Comments

2 reactions
anton-l commented, Nov 7, 2022

Ah, sorry for missing that, thanks @sgugger!

Linking the issue to track: https://github.com/pytorch/pytorch/issues/85427

2 reactions
averad commented, Nov 4, 2022

@teddybee you need to remove the accelerate package:

pip uninstall accelerate
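
Assuming accelerate has been removed (and diffusers 0.6.0 installed, per the report above), a quick check that the import works again:

# with accelerate uninstalled, the import that previously raised should now succeed
from diffusers import OnnxStableDiffusionPipeline
print("import ok")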

Read more comments on GitHub >

Top Results From Across the Web

Utilize Apple M1 chip causes error (kernel death) #13
However, executing the code below leads to kernel death. import torch from torch import autocast from diffusers import StableDiffusionPipeline ...
Read more >
Undefined symbol pytorch 1.7.0 and above
I was working with PyTorch 1.5.0 for quite some time and decided to update to 1.9.0. System: Ubuntu 18.04 Kernel: 4.15.0-147-generic ...
Read more >
Error while using accelerator = 'ddp' - PyTorch Lightning
My code works perfectly fine with distributed_backend='dp', but fails when I use distributed_backend='ddp' with the following error:
Read more >
Fastai v0.7 install issues thread - Part 1 (2018)
ERROR : fastai 0.7.0 has requirement torch<0.4, but you'll have torch 1.1.0 which is incompatible. Anyone successfully running 0.7.0 in Colab now ...
Read more >
How to solve the error for import torch on macOs?
Check your interpreter (The python version you are using at the moment). To check the Python version in your Jupyter notebook, first import...
Read more >
