
Data type mismatch when using stable diffusion in fp16


Describe the bug

When running the following code to try Stable Diffusion v1.5,

import torch
from diffusers import StableDiffusionPipeline

# Load the fp16 weights and keep the whole pipeline in half precision.
pipe = StableDiffusionPipeline.from_pretrained(
    "local_project_path/stable-diffusion-v1-5",
    torch_dtype=torch.float16, revision="fp16"
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]

I got the following error:

File "{conda_env_path}/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 260, in forward
    attn_output = torch.bmm(attn_probs, value_states)
RuntimeError: expected scalar type Half but found Float
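
For context, here is my own minimal sketch (not part of the original report) of what this exception means: `torch.bmm` received operands of different dtypes, which happens when some sub-module produces fp32 activations inside an otherwise fp16 model.

import torch

# Hypothetical minimal reproduction: mixing half- and single-precision
# operands in a batched matmul raises the same kind of RuntimeError
# (the exact wording can vary by device and PyTorch version).
attn_probs = torch.rand(1, 4, 4, dtype=torch.float16)
value_states = torch.rand(1, 4, 4, dtype=torch.float32)
torch.bmm(attn_probs, value_states)  # RuntimeError: expected scalar type Half but found Float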

Reproduction

No response

Logs

No response

System Info

  • diffusers version: 0.6.0
  • Platform: Linux-4.4.0-31-generic-x86_64-with-glibc2.27
  • Python version: 3.9.11
  • PyTorch version (GPU?): 1.12.0 (True)
  • Huggingface_hub version: 0.10.1
  • Transformers version: 4.20.1
  • Using GPU in script?: <fill in>
  • Using distributed or parallel set-up in script?: <fill in>

Issue Analytics

  • State: closed
  • Created: a year ago
  • Reactions: 1
  • Comments: 10 (6 by maintainers)

Top GitHub Comments

2 reactions
patrickvonplaten commented, Nov 2, 2022

I cannot reproduce the error - the code snippet runs fine for me. My environment is:

- `diffusers` version: 0.7.0.dev0
- Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Huggingface_hub version: 0.10.1
- Transformers version: 4.24.0.dev0
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

@ParadoxZW could you maybe try upgrading transformers to the newest version? The problem actually seems to come from transformers here, not diffusers:

pip install --upgrade transformers

Also, @dblunk88 is 100% right: we no longer recommend using autocast; instead, one should use “pure” FP16 as in the code example above.
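
To illustrate the distinction (my sketch, not code from the thread): the discouraged pattern keeps fp32 weights and wraps inference in an autocast context, while the recommended pattern loads the fp16 weights directly. The model path below reuses the reporter's placeholder.

import torch
from diffusers import StableDiffusionPipeline

prompt = "a photo of an astronaut riding a horse on mars"

# Discouraged: fp32 weights with an autocast context around inference.
# pipe = StableDiffusionPipeline.from_pretrained("local_project_path/stable-diffusion-v1-5").to("cuda")
# with torch.autocast("cuda"):
#     image = pipe(prompt).images[0]

# Recommended: load the fp16 weights and run without autocast.
pipe = StableDiffusionPipeline.from_pretrained(
    "local_project_path/stable-diffusion-v1-5",
    torch_dtype=torch.float16, revision="fp16"
).to("cuda")
image = pipe(prompt).images[0]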

1 reaction
pcuenca commented, Nov 8, 2022

The same thing comes up from time to time: https://discuss.huggingface.co/t/error-expected-scalar-type-half-but-found-float/25685/3?u=pcuenq. @patrickvonplaten perhaps we should mention it in the docs; I'll open a PR later unless somebody else wants to do it 😃


Top Results From Across the Web

Memory and speed - Hugging Face
We present some techniques and ideas to optimize Diffusers inference for memory or speed. As a general rule, we recommend the use of... (one of those techniques is sketched below)
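
A minimal sketch of one technique from that page, attention slicing, which computes attention in slices to lower peak VRAM at a small speed cost (assuming a diffusers version that exposes `enable_attention_slicing`, and reusing the reporter's placeholder path):

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "local_project_path/stable-diffusion-v1-5",
    torch_dtype=torch.float16, revision="fp16"
).to("cuda")
pipe.enable_attention_slicing()  # compute attention in slices to cut peak memory use

image = pipe("a photo of an astronaut riding a horse on mars").images[0]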
Issues compiling AITemplate for Stable Diffusion v2 #103
I tried just changing the huggingface model name and upgrading diffusers to main (pip install --upgrade git+https://github.com/huggingface/ ...

Help & Questions Megathread! : r/StableDiffusion - Reddit
I am trying to use Stable Diffusion (Automatic1111) on Google Colab made by TheLastBen as shown above. I am having trouble with installation...

Train With Mixed Precision - NVIDIA Documentation Center
Porting the model to use the FP16 data type where appropriate. Adding loss scaling to preserve small gradient values. The ability to train... (loss scaling is sketched below)

Automatic Mixed Precision package - torch.amp - PyTorch
...some operations use the torch.float32 (float) datatype and other operations use lower precision floating point datatype (lower_precision_fp): torch.float16 (half) or torch.bfloat16...
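
As a sketch of what the last two results describe (my illustration, not code from either page), a typical mixed-precision training step in PyTorch combines autocast with gradient scaling; the model and data here are stand-ins:

import torch

model = torch.nn.Linear(16, 4).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # scales losses to preserve small gradient values

for _ in range(10):
    inputs = torch.randn(8, 16, device="cuda")
    target = torch.randn(8, 4, device="cuda")
    optimizer.zero_grad()
    with torch.autocast("cuda"):       # forward pass in mixed precision
        loss = torch.nn.functional.mse_loss(model(inputs), target)
    scaler.scale(loss).backward()      # backprop on the scaled loss
    scaler.step(optimizer)             # unscales gradients, skips the step on inf/NaN
    scaler.update()                    # adjusts the scale factor for the next step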
