Data type mismatch when using stable diffusion in fp16
See original GitHub issue

Describe the bug
When running the following code to try Stable Diffusion v1.5,
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
"local_project_path/stable-diffusion-v1-5",
torch_dtype=torch.float16, revision="fp16"
)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
I got the following error:
File "{conda_env_path}/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 260, in forward
attn_output = torch.bmm(attn_probs, value_states)
RuntimeError: expected scalar type Half but found Float
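For context, the error means the two operands passed to torch.bmm do not share a dtype: one is float16 (Half) and the other float32 (Float). A minimal, self-contained illustration of the same kind of mismatch (the tensor names are placeholders, not the actual CLIP internals):

import torch

# bmm requires both operands to share a dtype; mixing fp16 and fp32 raises a
# RuntimeError along the lines of "expected scalar type Half but found Float"
# (the exact wording can vary by PyTorch version and device).
attn_probs = torch.randn(1, 4, 8, dtype=torch.float16)
value_states = torch.randn(1, 8, 16, dtype=torch.float32)
torch.bmm(attn_probs, value_states)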
Reproduction
No response
Logs
No response
System Info
- diffusers version: 0.6.0
- Platform: Linux-4.4.0-31-generic-x86_64-with-glibc2.27
- Python version: 3.9.11
- PyTorch version (GPU?): 1.12.0 (True)
- Huggingface_hub version: 0.10.1
- Transformers version: 4.20.1
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
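For reference, the same version information can be collected from inside the environment with a few print statements (a minimal sketch, nothing issue-specific):

import torch
import transformers
import diffusers

print("diffusers:", diffusers.__version__)
print("transformers:", transformers.__version__)
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())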
Issue Analytics
- State:
- Created a year ago
- Reactions: 1
- Comments: 10 (6 by maintainers)
Top Results From Across the Web

Memory and speed - Hugging Face
We present some techniques and ideas to optimize Diffusers inference for memory or speed. As a general rule, we recommend the use of...

Issues compiling AITemplate for Stable Diffusion v2 #103
I tried just changing the huggingface model name and upgrading diffusers to main ( pip install --upgrade git+https://github.com/huggingface/ ...

Help & Questions Megathread! : r/StableDiffusion - Reddit
I am trying to use Stable Diffusion (Automatic1111) on Google Colab made by TheLastBen as shown above. I am having trouble with installation...

Train With Mixed Precision - NVIDIA Documentation Center
Porting the model to use the FP16 data type where appropriate. Adding loss scaling to preserve small gradient values. The ability to train...

Automatic Mixed Precision package - torch.amp - PyTorch
float32 ( float ) datatype and other operations use lower precision floating point datatype ( lower_precision_fp ): torch.float16 ( half ) or torch.bfloat16... (see the sketch after this list)
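The last two entries describe mixed-precision training with torch.amp: run eligible ops in a lower-precision dtype under autocast and scale the loss so small fp16 gradients are not flushed to zero. A rough sketch of that pattern (the model, optimizer, and data below are placeholders, not taken from this issue):

import torch

model = torch.nn.Linear(16, 4).cuda()                     # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # placeholder optimizer
scaler = torch.cuda.amp.GradScaler()                      # loss scaling preserves small fp16 gradients

for _ in range(3):                                         # placeholder training loop
    x = torch.randn(8, 16, device="cuda")
    target = torch.randn(8, 4, device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                        # eligible ops run in fp16, the rest stay fp32
        loss = torch.nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()                          # backward pass on the scaled loss
    scaler.step(optimizer)                                 # unscale gradients, then optimizer step
    scaler.update()

Note that this is the training-side usage; the issue itself is about pure-fp16 inference, where the maintainers below recommend not using autocast at all.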
Top GitHub Comments
I cannot reproduce the error - the code snippet runs fine for me. My version is:
@ParadoxZW could you maybe try to upgrade transformers to the newest version - the problem seems to come from transformers here actually and not diffusers.

Also, @dblunk88 is 100% right, we do not recommend using autocast anymore - instead one should use "pure" FP16 as is done in the code example above.

The same thing happens from time to time: https://discuss.huggingface.co/t/error-expected-scalar-type-half-but-found-float/25685/3?u=pcuenq. @patrickvonplaten perhaps we'd need to mention it in the docs; I'll open a PR later unless somebody else wants to do it 😃
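To make the suggested fix concrete: upgrade transformers (for example, pip install --upgrade transformers) and then run the pipeline directly in fp16, with no autocast context around the call. A minimal sketch, reusing the local checkpoint path from the report:

import torch
from diffusers import StableDiffusionPipeline

# Load the fp16 weights directly; no torch.autocast wrapper is needed for inference.
pipe = StableDiffusionPipeline.from_pretrained(
    "local_project_path/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    revision="fp16",
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]  # "pure" fp16 inference, as recommended above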