How to use SD1.5 on M1 Mac? (CUDA not available Error)
At the end of the demo video you can see SD1.5 being used.
When I try to start Lama Cleaner with `--model=SD1.5` I get the following errors about missing CUDA:
```
/Users/USER/miniconda3/lib/python3.9/site-packages/torch/amp/autocast_mode.py:198: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling')
/Users/USER/miniconda3/lib/python3.9/site-packages/torch/amp/autocast_mode.py:198: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling')
usage: lama-cleaner [-h] [--host HOST] [--port PORT]
                    [--model {lama,ldm,zits,mat,fcf,sd1.5,cv2}]
                    [--hf_access_token HF_ACCESS_TOKEN] [--sd-disable-nsfw]
                    [--sd-cpu-textencoder] [--sd-run-local]
                    [--device {cuda,cpu}] [--gui]
                    [--gui-size GUI_SIZE GUI_SIZE] [--input INPUT] [--debug]
lama-cleaner: error: argument --model: invalid choice: 'SD1.5' (choose from 'lama', 'ldm', 'zits', 'mat', 'fcf', 'sd1.5', 'cv2')
```
What can I do to get it working? I have already downloaded both checkpoint versions, 1.5-pruned and 1.5-pruned-emaonly, from Hugging Face. Can someone explain the next steps to me?
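The usage string in the error message itself points to the fix: the `--model` choice is lowercase `sd1.5`, and on a Mac without an NVIDIA GPU the device must be `cpu`. A sketch of a corrected launch command, using only flags shown in the usage string above:

```shell
# Model names are lowercase: 'SD1.5' is rejected, 'sd1.5' is accepted.
# --device cpu avoids the CUDA path on machines without an NVIDIA GPU.
lama-cleaner --model sd1.5 --device cpu
```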
Issue Analytics
- State:
- Created a year ago
- Comments: 8 (4 by maintainers)
Install the virtual environment according to the guidelines in the blog post, then you can install Lama Cleaner (`pip3 install lama-cleaner`) after activating the virtualenv.
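The steps above can be sketched as a short shell session; the environment name `lama` here is just an example:

```shell
# Create and activate a virtual environment (the name 'lama' is arbitrary)
python3 -m venv lama
source lama/bin/activate

# Install Lama Cleaner inside the activated environment
pip3 install lama-cleaner
```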
I made a typo; it should be `--sd-run-local`. This parameter simply lets you skip passing the Hugging Face token; it has nothing to do with running speed. You can ignore the CUDA warning: it is normal for it to appear on machines without an NVIDIA GPU.
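To confirm what the warning is telling you, you can query PyTorch directly (assuming it is installed in the active environment):

```shell
# On an M1 Mac this prints False: the warning is expected, not an error.
python3 -c "import torch; print(torch.cuda.is_available())"
```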
The real reason the model runs slowly is that it should use PyTorch compiled for the M1 chip, which Lama Cleaner does not currently support. I will add M1 support if I can get an M1 Mac.
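As a side note, you can check whether your PyTorch build exposes the Apple-silicon (MPS) backend, which PyTorch 1.12+ ships on M1 Macs; this only tells you whether PyTorch itself could use the M1 GPU, not whether Lama Cleaner supports it:

```shell
# Prints True only on a PyTorch build with MPS (Metal) support
python3 -c "import torch; print(torch.backends.mps.is_available())"
```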