pipeline does not load a (local) model
See original GitHub issue
Hello, great Hugging Face team!
I am working on a computer behind a firewall, so I cannot download files from Python. I am simply trying to load a sentiment-analysis pipeline, so I downloaded all the files available at https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/tree/main into a local folder (I am using TensorFlow):
- config.json
- tf_model.h5
- tokenizer_config.json
- vocab.txt
However, when I try to use this path in a pipeline, I get a strange error:
from transformers import pipeline

classifier = pipeline(task="sentiment-analysis",
                      model="C:\\Users\\me\\mymodel",
                      tokenizer="C:\\Users\\me\\mymodel")
ValueError: unable to parse C:\Users\me\mymodel\modelcard.json as a URL or as a local path
Is this a bug? Thanks!
Issue Analytics
- State:
- Created 2 years ago
- Reactions: 1
- Comments: 6 (3 by maintainers)
Top GitHub Comments
It depends on whether you want to use the pipeline or the model right away. Both should work with the files stored locally.
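For reference, a minimal sketch of both approaches, assuming the local folder contains the config, weights, and tokenizer files listed above (the path is the asker's example, and the TF classes are used because the weights are tf_model.h5):

from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, pipeline

local_dir = "C:\\Users\\me\\mymodel"

# Option 1: let the pipeline resolve everything from the local folder
classifier = pipeline(task="sentiment-analysis", model=local_dir, tokenizer=local_dir)

# Option 2: load the tokenizer and model explicitly, then hand them to the pipeline
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = TFAutoModelForSequenceClassification.from_pretrained(local_dir)
classifier = pipeline(task="sentiment-analysis", model=model, tokenizer=tokenizer)

print(classifier("I love this!"))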
Yes, you can download them directly from the web. On the model page, there is a "Use in Transformers" button on the right. It shows how to either load the weights from the Hub into RAM using .from_pretrained(), or clone the files with git-lfs.
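As a rough sketch of that workflow (run on a machine with internet access; the target folder name is arbitrary), you could save the model locally and then copy the folder behind the firewall:

from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# Pull the tokenizer and TF weights from the Hub ...
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# ... then write them to a folder that can be moved to the offline machine
tokenizer.save_pretrained("./mymodel")
model.save_pretrained("./mymodel")

Or, with git-lfs installed, clone the model repository directly:

git lfs install
git clone https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english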