Inference API: Can't load tokenizer using from_pretrained, please update its configuration: No such file or directory (os error 2)
Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
Who can help
Information
I am trying to use the Inference API on the Hugging Face Hub with a version of GPT-2 I fine-tuned on a custom task.
To reproduce
When I try to use the API, the following error comes up:
Steps to reproduce the behavior:
Here are the files I have in my private repo:
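For reference, a minimal sketch of the kind of request that hits the hosted Inference API; the model ID and token below are placeholders, not my actual repo:

```python
import requests

# Placeholder model ID and token -- replace with your own private repo and access token
API_URL = "https://api-inference.huggingface.co/models/my-username/my-finetuned-gpt2"
headers = {"Authorization": "Bearer hf_xxx"}

def query(payload):
    # POST the input text to the hosted Inference API endpoint
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

print(query({"inputs": "Once upon a time"}))
```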
Expected behavior
I uploaded the tokenizer files to Colab, and I was able to instantiate a tokenizer with the from_pretrained method, so I don't know why the Inference API throws an error.
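A minimal sketch of the check that works in Colab; the local directory path is a placeholder for wherever the tokenizer files were uploaded:

```python
from transformers import GPT2Tokenizer

# Placeholder path to the directory holding the uploaded tokenizer files
tokenizer = GPT2Tokenizer.from_pretrained("/content/my-finetuned-gpt2")

# Tokenization works locally, so the files themselves seem fine
print(tokenizer("Hello world"))
```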
Issue Analytics
- State:
- Created: 2 years ago
- Comments: 20 (9 by maintainers)
tokenizer_config.json is necessary for some additional information in the tokenizer. The original gpt2 repo might be different, but there's some code for legacy models to make sure everything works smoothly for those. The path within that file is indeed something to look into, but it should work nonetheless.
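One way to double-check is to regenerate the tokenizer files with save_pretrained, which writes tokenizer_config.json alongside the vocab files; a minimal sketch, assuming a hypothetical local output directory:

```python
import os
from transformers import GPT2Tokenizer

# Placeholder local directory for the fine-tuned model's tokenizer files
save_dir = "my-finetuned-gpt2"

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.save_pretrained(save_dir)  # writes tokenizer_config.json, vocab.json, merges.txt, ...

# The Inference API expects tokenizer_config.json to be present in the repo
print(sorted(os.listdir(save_dir)))
```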
Hi,
I'm having the same issue. Steps followed:
Behavior: