
RobertaTokenizerFast does not add special tokens


I’m not sure whether this belongs in tokenizers or in transformers, since it involves both. The classes that misbehave are from transformers, so I’m posting it here.

Environment info

  • transformers version: 4.3.3
  • Platform: Colab
  • PyTorch version (GPU?): n/a
  • Tensorflow version (GPU?): n/a
  • Using GPU in script?: no
  • Using distributed or parallel set-up in script?: no


Reproduction code

https://colab.research.google.com/drive/1iYLBLzXRkQpdPyVlIdi_qNCzfbD1uwGs?usp=sharing

When loading a tokenizer that was trained with tokenizers through transformers, e.g.

tfast = RobertaTokenizerFast.from_pretrained("./workdir/tokenizer", model_max_length=10)

it does not add special tokens:

tfast("asd", add_special_tokens=True)
{'input_ids': [400, 72], 'attention_mask': [1, 1]}

The “slow” version behaves correctly:

tslow = RobertaTokenizer.from_pretrained("./workdir/tokenizer", model_max_length=10)
tslow("asd", add_special_tokens=True)
{'input_ids': [0, 400, 72, 2], 'attention_mask': [1, 1, 1, 1]}

Expected behavior

Both tokenizers produce the same output, i.e. the fast tokenizer also wraps the input in the <s> (id 0) and </s> (id 2) special tokens.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

1 reaction
LysandreJik commented, Feb 26, 2021

Yes, I believe that is so. Tokenizers created with tokenizers need to have their post-processors, pre-tokenizers, and other components defined to work correctly; otherwise they yield unexpected results, as we have just seen!
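
For anyone hitting the same thing, here is a minimal sketch of what "defining the post-processor" before saving can look like. It assumes the tokenizer was trained with the tokenizers library; the corpus path and vocab size are placeholders, and RobertaProcessing is just one option (TemplateProcessing works as well):

from tokenizers import Tokenizer, models, pre_tokenizers, trainers, decoders
from tokenizers.processors import RobertaProcessing

# Build and train a byte-level BPE tokenizer (placeholder corpus/size).
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
tokenizer.decoder = decoders.ByteLevel()
trainer = trainers.BpeTrainer(
    vocab_size=5000,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
tokenizer.train(["corpus.txt"], trainer)

# The crucial step: attach a RoBERTa-style post-processor so encoding
# wraps each sequence as <s> ... </s>. Without it, the serialized
# tokenizer.json carries no rule for special tokens, and the fast
# tokenizer in transformers adds none.
tokenizer.post_processor = RobertaProcessing(
    sep=("</s>", tokenizer.token_to_id("</s>")),
    cls=("<s>", tokenizer.token_to_id("<s>")),
)

tokenizer.save("./workdir/tokenizer/tokenizer.json")

With the post-processor serialized into tokenizer.json, RobertaTokenizerFast.from_pretrained("./workdir/tokenizer") should pick it up and produce the same [0, ..., 2] framing as the slow tokenizer.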

0 reactions
marrrcin commented, Mar 1, 2021

Closing, but it still seems odd that the behaviour for the exact same files differs between the two tokenizers…
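
The likely reason the very same files diverge: the “slow” RobertaTokenizer hardcodes the special-token layout in its Python code, while the fast tokenizer only applies whatever post-processor was serialized alongside the vocabulary, so here it has nothing to apply. Using the tslow object from the report above:

# The slow tokenizer builds <s> ... </s> in Python, regardless of
# what was (or wasn't) saved on disk:
tslow.build_inputs_with_special_tokens([400, 72])
# [0, 400, 72, 2]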

