PreTrainedTokenizerBase issue produced by PR #19073
System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.4.0-125-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, …)
- My own task or dataset (give details below)
Reproduction
Have a local `tokenizer.json` file (different from the Hub's file and in the same folder as the invoked code) and run the following code:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
print(tokenizer)
```
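To set up the precondition deterministically, one can first drop a stale `tokenizer.json` into the working directory. This is an illustrative sketch: the exact file content below is an assumption, since it only needs to differ from the Hub's copy for the mismatch to surface.

```python
import json
import os

# Create a stale local tokenizer.json next to the invoked code.
# The content here is arbitrary (an assumption for illustration);
# it just has to differ from the Hub's tokenizer.json.
stale = {"version": "1.0", "model": {"type": "BPE", "vocab": {}, "merges": []}}
with open("tokenizer.json", "w") as f:
    json.dump(stale, f)

# This local file is what gets picked up instead of the Hub download.
print(os.path.isfile("tokenizer.json"))  # True
```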
Error

The error depends on the contents of the local `tokenizer.json`: it could state, for instance, that the two tokenizers differ in their number of tokens. Nevertheless, here is the trace of what could be the issue, in `transformers.tokenization_utils_base` (line 1726):

```python
for file_id, file_path in vocab_files.items():
    if file_path is None:
        resolved_vocab_files[file_id] = None
    elif os.path.isfile(file_path):
        resolved_vocab_files[file_id] = file_path
    ...
```
If we print the `vocab_files` dictionary, most of the time its output will be as expected:

```python
{'vocab_file': 'vocab.json', 'merges_file': 'merges.txt', 'tokenizer_file': 'tokenizer.json', 'added_tokens_file': 'added_tokens.json', 'special_tokens_map_file': 'special_tokens_map.json', 'tokenizer_config_file': 'tokenizer_config.json'}
```
With the lines added in PR #19073, at some point the code checks whether `tokenizer.json` exists as a file on the local system, and if it does, it marks it as the `file_path` in the `resolved_vocab_files` dictionary. Unfortunately, this is not the expected behavior: the `file_path` should come from the Hub download (since we are loading a pre-trained tokenizer from an identifier found on the Hub) and not from a local file.
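The shadowing effect can be sketched in isolation. The following is a simplified stand-in for the real resolution loop, not the actual `transformers` code, and the `<hub-cache>/...` placeholder is an assumption representing what the Hub/cache lookup would return:

```python
import os
import tempfile

def resolve(vocab_files):
    # Simplified stand-in for the loop in tokenization_utils_base.
    resolved = {}
    for file_id, file_path in vocab_files.items():
        if file_path is None:
            resolved[file_id] = None
        elif os.path.isfile(file_path):
            # A same-named file in the working directory shadows the Hub copy.
            resolved[file_id] = file_path
        else:
            # Placeholder for what cached_file() would resolve (assumption).
            resolved[file_id] = f"<hub-cache>/{file_path}"
    return resolved

vocab_files = {"tokenizer_file": "tokenizer.json", "merges_file": "merges.txt"}

with tempfile.TemporaryDirectory() as tmp:
    os.chdir(tmp)
    with open("tokenizer.json", "w") as f:
        f.write("{}")  # stale local file next to the script
    print(resolve(vocab_files))
    # {'tokenizer_file': 'tokenizer.json', 'merges_file': '<hub-cache>/merges.txt'}
```

The local `tokenizer.json` wins, while `merges.txt` (absent locally) falls through to the Hub path, which matches the two `resolved_vocab_files` outputs shown below.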
If we print the `resolved_vocab_files` dictionary with the added lines from PR #19073, this is its output:

```python
{... 'tokenizer_file': 'tokenizer.json' ...}
```

Without the added lines:

```python
{... 'tokenizer_file': '/home/gderosa/.cache/huggingface/hub/models--Salesforce--codegen-350M-mono/snapshots/40b7a3b6e99e73bdb497a14b740e7167b3413c74/tokenizer.json' ...}
```
My assumption is that this same behavior should occur whenever users have any of the local files defined by the `vocab_files` dictionary in the folder where they run their scripts.
Solutions

Perhaps the `cached_file` loading should happen before the added lines, and only when the cached version cannot be found would it fall back to local files?
Expected behavior

The expected behavior is to use the `tokenizer_file` resolved from `pretrained_model_name_or_path` instead of the local file.
Issue Analytics
- Created a year ago
- Comments: 5 (5 by maintainers)
Top GitHub Comments
Got time today actually, this should be fixed by the PR linked above!
Thanks for the report. I understand the bug, and your analysis of its cause seems correct. I will work on a fix as soon as I have some free time (might be early next week only)!