Help: cannot load pretrain models from .pytorch_pretrained_bert folder
I need to run the package on a machine without internet access, so I copied the ~/.pytorch_pretrained_bert cache folder from one machine to the other.
I installed Anaconda3 and tried to run tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103').
I got this error:
Model name 'transfo-xl-wt103' was not found in model name list (transfo-xl-wt103). We assumed 'transfo-xl-wt103' was a path or url but couldn't find files https://s3.amazonaws.com/models.huggingface.co/bert/transfo-xl-wt103-vocab.bin at this path or url.
Do I need to copy anything else to the second machine to make it load from the cache folder?
Environment: Ubuntu 16.04, PyTorch 1.0
Issue analytics: created 4 years ago; 13 comments (6 by maintainers)

I will add a section to the readme detailing how to load a model from disk. Basically, you can download the models and vocabulary from our S3 by following the links at the top of each file (modeling_transfo_xl.py and tokenization_transfo_xl.py for Transformer-XL) and put them in one directory, using the filenames also indicated at the top of each file. Here is the process in your case:
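As a sketch of that process: on a machine with internet access, download the files for transfo-xl-wt103 (the vocab URL appears in the error message above; the weights and config URLs are assumed here to follow the same pattern) and save them in one directory under the generic filenames the library looks for when given a directory path. The helper below is a hypothetical convenience, not part of the library, for checking that the directory is complete before you copy it to the offline machine:

```python
# Sketch of the offline setup for transfo-xl-wt103. The local filenames
# below follow the VOCAB_NAME / WEIGHTS_NAME / CONFIG_NAME constants at
# the top of the pytorch_pretrained_bert source files; double-check them
# against your installed version.
import os

# Files to download on a connected machine (the vocab URL is shown in
# the error message; the other two URLs are assumed to follow the same
# S3 naming pattern), mapped to the filename each should have inside the
# local model directory.
EXPECTED_FILES = {
    "transfo-xl-wt103-vocab.bin": "vocab.bin",                  # tokenizer vocabulary
    "transfo-xl-wt103-pytorch_model.bin": "pytorch_model.bin",  # model weights (assumed name)
    "transfo-xl-wt103-config.json": "config.json",              # model configuration (assumed name)
}

def missing_files(model_dir):
    """Return the expected local filenames not yet present in model_dir."""
    return [local for local in EXPECTED_FILES.values()
            if not os.path.isfile(os.path.join(model_dir, local))]
```

After copying the directory to the offline machine, a quick missing_files('./transfo-xl-wt103') call confirms nothing was lost in transit before you try to load the model.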
Now just load the model and tokenizer by pointing to this directory:
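For example, a minimal sketch assuming the files sit in ./transfo-xl-wt103 (TransfoXLModel is shown here; substitute whichever model head you actually need):

```python
from pytorch_pretrained_bert import TransfoXLTokenizer, TransfoXLModel

# Point from_pretrained at the local directory instead of a model name;
# no network access is needed once the files are in place.
tokenizer = TransfoXLTokenizer.from_pretrained('./transfo-xl-wt103')
model = TransfoXLModel.from_pretrained('./transfo-xl-wt103')
```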
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.