
How can running locally require tokenized access to the ckpt file? Is that really local? How is that local?

See original GitHub issue

Describe the bug

The problem is this required token login flow: https://discuss.huggingface.co/t/how-to-login-to-huggingface-hub-with-access-token/22498/5

So… just let us use our own path to our folders and ckpt file, okay? Like a local thing, you know?

Reproduction

Try it on miniconda. Good luck!

Logs

none

System Info

Windows 10, miniconda, ldm env

Issue Analytics

  • State: closed
  • Created: a year ago
  • Reactions: 1
  • Comments: 12 (5 by maintainers)

Top GitHub Comments

5 reactions
patrickvonplaten commented, Sep 17, 2022

Hey @ExponentialML,

Thanks a lot for the feedback here - I think we haven’t done a great job at showing how to easily download this model and run it locally. It’s literally as easy as doing:

git lfs install
git clone https://huggingface.co/CompVis/stable-diffusion-v1-4

Followed by:

from diffusers import DiffusionPipeline

generator = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-4")

No need for an authentication token or cache whatsoever. It’s also explained here: https://huggingface.co/docs/diffusers/quicktour
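
For completeness, a minimal end-to-end sketch of that flow, assuming the repository was cloned into ./stable-diffusion-v1-4 as above, a CUDA GPU, and a diffusers version whose pipeline output exposes .images; the prompt and output filename are just illustrations:

from diffusers import DiffusionPipeline

# Loading from a local folder path skips the Hub entirely:
# no token, no cache lookup, no network access.
pipe = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-4")
pipe = pipe.to("cuda")  # use "cpu" without a GPU (much slower)

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")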

Given that you’re not the first to mention this problem, I’ll open an issue now about providing better documentation.

3 reactions
Inkorak commented, Sep 9, 2022

You can run it locally.

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained('path to weights on your computer')
pipe = pipe.to("cuda")
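
A minimal sketch building on this, assuming the local folder uses the standard diffusers layout (the directory containing model_index.json) and a recent diffusers version; local_files_only=True is an extra guard that makes loading fail fast instead of reaching for the Hub, and the path is hypothetical:

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "/path/to/stable-diffusion-v1-4",  # hypothetical local path
    local_files_only=True,  # fail instead of downloading anything
)
pipe = pipe.to("cuda")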

Top Results From Across the Web

  • transformers and BERT downloading to your local machine: “I went to the link and manually downloaded all files to a folder and specified path of that folder in my code. Tokenizer…” (see the sketch after this list)
  • How to load the pre-trained BERT model from local/colab…: “You are using the Transformers library from HuggingFace. … You need to download a converted checkpoint, from there.”
  • Use tokenizers from Tokenizers - Hugging Face: “We now have a tokenizer trained on the files we defined. We can either continue using it in that runtime, or save it…”
  • Save and load models | TensorFlow Core: “This means a model can resume where it left off and avoid long training times. Saving also means you can share your model…”
  • Understand BLOOM, the Largest Open-Access AI, and Run It…: “A BLOOM checkpoint takes 330 GB of disk space, so it…”
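
The first two results describe the same local-path pattern for transformers models; a minimal sketch, assuming the config, weights, and tokenizer files were manually downloaded into a single folder (the path is hypothetical):

from transformers import AutoModel, AutoTokenizer

# Point both loaders at a local folder instead of a Hub model id;
# nothing is downloaded and no token is needed if the files are present.
tokenizer = AutoTokenizer.from_pretrained("/path/to/bert-base-uncased")
model = AutoModel.from_pretrained("/path/to/bert-base-uncased")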
