OSError: file bert-base-uncased/config.json not found
See original GitHub issue

Environment info
- transformers version: 4.4.2
- Python version: 3.6
- PyTorch version (GPU?): 1.8.0 (Tesla V100)
Information

The problem arises when running:

    from transformers import BertModel
    model = BertModel.from_pretrained('bert-base-uncased')
Error Info (some personal info has been replaced by ---)
file bert-base-uncased/config.json not found
Traceback (most recent call last):
File "---/anaconda3/envs/attn/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/configuration_utils.py", line 420, in get_config_dict
File "---/anaconda3/envs/attn/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/file_utils.py", line 1063, in cached_path
OSError: file bert-base-uncased/config.json not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "---.py", line 107, in <module>
from_pretrained_input()
File "---.py", line 96, in from_pretrained_input
model = BertModel.from_pretrained('bert-base-uncased')
File "---/anaconda3/envs/attn/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/modeling_utils.py", line 962, in from_pretrained
File "---/anaconda3/envs/attn/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/configuration_utils.py", line 372, in from_pretrained
File "---/anaconda3/envs/attn/lib/python3.6/site-packages/transformers-4.2.2-py3.8.egg/transformers/configuration_utils.py", line 432, in get_config_dict
OSError: Can't load config for 'bert-base-uncased'. Make sure that:
- 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-uncased' is the correct path to a directory containing a config.json file
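One thing worth checking before anything else: `from_pretrained()` treats the identifier as a local path first and only falls back to the Hub when no such path exists, so a stray directory named `bert-base-uncased` in the working directory will shadow the download (this is exactly the cause reported in the comments at the bottom of this issue). A minimal diagnostic sketch, using only the standard library:

```python
import os

def shadowing_dir(model_id, cwd="."):
    """Return the path of a local directory that shadows a Hub model id,
    or None if there is no such directory."""
    candidate = os.path.join(cwd, model_id)
    return candidate if os.path.isdir(candidate) else None

# If this prints a path instead of None, that directory (not the Hub)
# is what from_pretrained() is trying to read config.json from.
print(shadowing_dir("bert-base-uncased"))
```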
What I have read:
https://github.com/huggingface/transformers/issues/353
What I have tried:
- Loading from a downloaded model file works well:
  wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz
  then unpack the archive, rename bert_config.json to config.json, and run
  model = BertModel.from_pretrained(BERT_BASE_UNCASED_CACHE)
- Enough disk space, enough memory, a free GPU
- Open internet connection, no proxy
- import pytorch_pretrained_bert as ppb
  assert 'bert-large-cased' in ppb.modeling.PRETRAINED_MODEL_ARCHIVE_MAP
- The following models load without problems:
  model = BertModel.from_pretrained('bert-base-cased')
  model = RobertaModel.from_pretrained('roberta-base')
- Everything works in the server shell but not in local PyCharm (remote deployment to the server)
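When the local-directory workaround above is used, it helps to confirm the unpacked folder actually contains the files `from_pretrained(path)` expects before loading. A sketch under the assumption that the old S3 archive ships `bert_config.json` (hence the rename) plus a `pytorch_model.bin` weights file; the expected names are an assumption, not taken from this thread:

```python
import os

def missing_model_files(model_dir,
                        expected=("config.json", "pytorch_model.bin")):
    """List which of the files a local-directory load needs are absent.
    The default names are assumed from the old S3 archive layout."""
    return [f for f in expected
            if not os.path.isfile(os.path.join(model_dir, f))]
```

If the returned list is non-empty, loading from that path will raise the same "config.json not found" OSError as above.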
Observation:
- PyCharm can find the transformers installed with pip, but that install triggers this problem.
- PyCharm cannot find the current transformers installed with conda:
  conda install transformers=4.4 -n env -c huggingface
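To tell which of the two installations the PyCharm interpreter actually resolves (the traceback above points at a transformers-4.2.2 egg even though 4.4.2 is reported), a quick standard-library sketch:

```python
import importlib.util
import sys

def module_location(name):
    """Return the file a module would be imported from, or None if the
    interpreter cannot find it at all."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# Compare this against the conda env you expect PyCharm to use.
print("interpreter:  ", sys.executable)
print("transformers: ", module_location("transformers"))
```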
Issue Analytics
- State:
- Created: 2 years ago
- Reactions: 1
- Comments: 11 (1 by maintainers)
Top Results From Across the Web

It looks like the config file at 'bert-base-uncased' is not a ...
Working fine for months, then I interrupted a "bert-large-cased" download and the following code returns the error in the title:

Configuration
The base class PretrainedConfig implements the common methods for loading/saving a configuration either from a local file or directory, or from a pretrained ...

Can't load bert German model from huggingface
Hi Rasa community, I'm using rasa to build a bot in German language and want to try out BERT in LanguageModelFeaturizer.

OSError: It looks like the config file at 'roberta-base ...
I'd firstly suggest using relative references instead of passing a GitHub URL as a file path. URLs are not valid file paths. Before...

bert model save_pretrained - You.com | The AI Search ...
__init__() config = BertConfig.from_pretrained('bert-base-uncased', ... download the model (in your case TensorFlow model .h5 and the config.json file), ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments

Hi, I've had the same error but with roberta-base. It turned out that I had an empty folder named roberta-base in my working directory; removing it solved the issue.

I found this issue is caused by setting the cache directory to the checkpoint name: TrainingArguments(checkpoint, evaluation_strategy='steps'). Changing checkpoint to something else resolves the issue.
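Both comments point at the same root cause: a local directory named after the model id, with no config.json inside, shadows the Hub lookup. A hedged fix sketch (the .bak suffix and the move-aside approach are my choices, not from the thread) that makes `from_pretrained()` fall back to downloading; for TrainingArguments, the simpler fix is to pick an output_dir that is not a model identifier, e.g. './results':

```python
import os
import shutil

def unshadow(model_id):
    """If a local directory named like the model id exists but holds no
    config.json, move it aside so the Hub copy is used instead.
    Returns True if a directory was moved."""
    config = os.path.join(model_id, "config.json")
    if os.path.isdir(model_id) and not os.path.isfile(config):
        shutil.move(model_id, model_id + ".bak")
        return True
    return False
```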