
Stable Diffusion 2 and Apple Silicon/MPS


Describe the bug

The diffusers code runs under Apple Silicon/MPS for Stable Diffusion 2.0, but the result is always noise; no usable images are generated.

Additionally, if you call pipe.enable_attention_slicing(), you get the following error. This is with the latest dev source from the repository, not the stable release.

/Users/fahim/Code/Python/sd2/diff.py:11 in <module>

     8 device = torch.device("cuda" if torch.cuda.is_available() else "mps" if torch.has_mps el
     9 scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
    10 pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, safety_che
  ❱ 11 pipe.enable_attention_slicing()
    12 # pipe = StableDiffusionPipeline.from_pretrained(model_id).to(device)
    13
    14 prompt = "a photo of an astronaut riding a tricerotops"

/Users/fahim/miniconda3/envs/ml/lib/python3.9/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:170 in enable_attention_slicing

   167         if slice_size == "auto":
   168             # half the attention head size is usually a good trade-off between
   169             # speed and memory
 ❱ 170             slice_size = self.unet.config.attention_head_dim // 2
   171         self.unet.set_attention_slice(slice_size)
   172
   173     def disable_attention_slicing(self):

TypeError: unsupported operand type(s) for //: 'list' and 'int'
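The TypeError arises because, for the Stable Diffusion 2 UNet, the config's attention_head_dim is a per-block list rather than a single int, so the floor division list // 2 fails. A minimal sketch of a defensive version of that computation (the name compute_slice_size is illustrative; this is not the library's actual patch):

```python
def compute_slice_size(attention_head_dim):
    """Halve the attention head size, handling both the int form
    (Stable Diffusion 1.x configs) and the per-block list form (2.x)."""
    if isinstance(attention_head_dim, int):
        return attention_head_dim // 2
    # For a list, slice based on the smallest head dim so the
    # resulting slice size is valid for every attention block.
    return min(attention_head_dim) // 2

# Int-style config, as in SD 1.x:
print(compute_slice_size(8))             # → 4
# List-style config, as in SD 2.x (e.g. [5, 10, 20, 20]):
print(compute_slice_size([5, 10, 20, 20]))  # → 2
```

The upstream fix in diffusers added a similar int/list branch inside enable_attention_slicing, which is why upgrading past 0.9.0.dev0 resolves the crash.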

Reproduction

This is the script used:

import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

model_id = "stabilityai/stable-diffusion-2"

# Use the Euler scheduler here instead
device = torch.device("cuda" if torch.cuda.is_available() else "mps" if torch.has_mps else "cpu")
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, safety_checker=None).to(device)

prompt = "a photo of an astronaut riding a tricerotops"
image = pipe(prompt, height=512, width=512, num_inference_steps=25).images[0]

image.save("test.png")
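A side note on the device-picking one-liner in the script: torch.has_mps has since been deprecated in favor of torch.backends.mps.is_available(). The fallback chain can be sketched as a small helper; the availability flags are passed in here purely so the logic is illustrated without requiring a GPU (names are hypothetical):

```python
def pick_device(cuda_available, mps_available):
    """Return the preferred device string: CUDA first, then Apple MPS,
    then CPU as the final fallback."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# In the actual script these flags would come from
# torch.cuda.is_available() and torch.backends.mps.is_available().
print(pick_device(False, True))   # → "mps"
print(pick_device(True, True))    # → "cuda"
print(pick_device(False, False))  # → "cpu"
```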

Logs

No response

System Info

  • diffusers version: 0.9.0.dev0
  • Platform: macOS-13.0-arm64-arm-64bit
  • Python version: 3.9.13
  • PyTorch version (GPU?): 1.14.0.dev20221124 (False)
  • Huggingface_hub version: 0.10.1
  • Transformers version: 4.24.0
  • Using GPU in script?: yes. See script
  • Using distributed or parallel set-up in script?: no. See script

Issue Analytics

  • State: closed
  • Created: 10 months ago
  • Comments: 11 (5 by maintainers)

Top GitHub Comments

1 reaction
lkaupp commented, Nov 30, 2022

Any idea when this fix will also be available in the inpainting repository? Edit: the other repo is a bunch of fixes away; everything beyond 512×512 px is not doable with my RTX 3090 (24 GB of memory). xformers and slicing (manually copy-pasted from this fix) are working.

1 reaction
FahimF commented, Nov 26, 2022

After testing today, I can confirm that this works on my Apple M1 🙂 Thank you!
