
The single_word option of new tokens is disabled by save_pretrained when a tokenizer is saved and reloaded twice

Environment info

I set up the environment as follows:

conda create -n test python=3.9
conda activate test

pip install transformers
# I got transformers 4.10.2 and tokenizers 0.10.3

Other details:

  • transformers version: 4.10.2
  • Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
  • Python version: 3.9.6
  • PyTorch version (GPU?): not installed (NA)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: No
  • Using distributed or parallel set-up in script?: No

Information

The single_word option of new tokens is disabled by save_pretrained when the tokenizer is saved and reloaded twice.

To reproduce

from transformers import AutoTokenizer
from tokenizers import AddedToken

# Load tokenizer and add tokens.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
new_vocab = [AddedToken("some_word", single_word=True), AddedToken("some_words", single_word=True)]
tokenizer.add_tokens(new_vocab)

def check_tokenizer():
    # Encode "some_words" and print the resulting tokens. With single_word=True,
    # the added token "some_word" should only match whole words, so "some_words"
    # must stay a single token rather than being split into "some_word" + "s".
    print(tokenizer.convert_ids_to_tokens(tokenizer.encode("some_words", add_special_tokens=False)))

check_tokenizer()

# Save and reload tokenizer
tokenizer.save_pretrained("first_save")

tokenizer = AutoTokenizer.from_pretrained("first_save")

check_tokenizer()

# Save and reload tokenizer again
tokenizer.save_pretrained("second_save")

tokenizer = AutoTokenizer.from_pretrained("second_save")

check_tokenizer()

The above code outputs:

['some_words']
['some_words']
['some_word', 's']

first_save/tokenizer.json includes the following entries:

{"id":28996,"special":false,"content":"some_word","single_word":true,"lstrip":false,"rstrip":false,"normalized":true},
{"id":28997,"special":false,"content":"some_words","single_word":true,"lstrip":false,"rstrip":false,"normalized":true}

However, in second_save/tokenizer.json, these entries have changed: the value of "single_word" has flipped from true to false.

{"id":28996,"special":false,"content":"some_word","single_word":false,"lstrip":false,"rstrip":false,"normalized":true},{"id":28997,"special":false,"content":"some_words","single_word":false,"lstrip":false,"rstrip":false,"normalized":true}

Expected behavior

['some_words']
['some_words']
['some_words']
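
Until the underlying bug is fixed, one possible workaround (a sketch, not an official fix) is to patch the saved tokenizer.json by hand before reloading, restoring the single_word flag for the tokens we added; this relies on the same "added_tokens" layout shown above, and the token list is illustrative:

import json

from transformers import AutoTokenizer

# Tokens we originally added with single_word=True (illustrative list).
SINGLE_WORD_TOKENS = {"some_word", "some_words"}

path = "second_save/tokenizer.json"
with open(path) as f:
    data = json.load(f)

# Flip the flag back to True for our tokens before reloading.
for token in data["added_tokens"]:
    if token["content"] in SINGLE_WORD_TOKENS:
        token["single_word"] = True

with open(path, "w") as f:
    json.dump(data, f, ensure_ascii=False)

tokenizer = AutoTokenizer.from_pretrained("second_save")
print(tokenizer.convert_ids_to_tokens(tokenizer.encode("some_words", add_special_tokens=False)))
# Should print ['some_words'] again, matching the first two outputs above.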

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
SaulLu commented on Nov 9, 2021

It is indeed interesting!

And thank you very much for all the analysis and the fix @qqaatw! I’m putting it on my todo list to try to find something that would avoid this undesirable behavior.

0 reactions
github-actions[bot] commented on Oct 15, 2021

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.
