
Deberta Tokenizer convert_ids_to_tokens() is not giving expected results

See original GitHub issue

Environment info

  • transformers version: 4.3.0
  • Platform: Colab
  • Python version: 3.9
  • PyTorch version (GPU?): No
  • Tensorflow version (GPU?): No
  • Using GPU in script?: No
  • Using distributed or parallel set-up in script?: No

Information

I am using the Deberta tokenizer. The tokenizer's convert_ids_to_tokens() is not returning the expected tokens.

The problem arises when using:

  • my own modified scripts: (give details below)

The task I am working on is:

  • an official GLUE/SQuAD task: (give the name)
  • my own task or dataset

To reproduce

Steps to reproduce the behavior:

  1. Get the Deberta tokenizer:
from transformers import DebertaTokenizer
deberta_tokenizer = DebertaTokenizer.from_pretrained('microsoft/deberta-base')
  2. Encode an example using the tokenizer:
example = "Hi I am Bhadresh. I found an issue in Deberta Tokenizer"
encoded_example = deberta_tokenizer.encode(example)
  3. Convert the ids back to tokens:
deberta_tokenizer.convert_ids_to_tokens(encoded_example)
"""
Output: ['[CLS]', '17250', '314', '716', '16581', '324', '3447', '13', '314', '1043', '281', '2071', '287', '1024', '4835', '64', '29130', '7509', '[SEP]']
"""

Colab Link For Reproducing

Expected behavior

It should return tokens like this:

['[CLS]', 'hi', 'i', 'am', 'b', '##had', '##resh', '.', 'i', 'found', 'an', 'issue', 'in', 'de', '##bert', '##a', 'token', '##izer', '[SEP]']

Not just the integer ids converted to strings, which is the current behavior.
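
For comparison, a WordPiece tokenizer such as bert-base-uncased does return readable sub-word strings from convert_ids_to_tokens(), which is the style of output described above. A minimal sketch for illustration only (the exact splits are model-specific):

from transformers import BertTokenizer

# Illustration only: a WordPiece tokenizer maps ids back to readable
# sub-word strings rather than to numeric-looking token ids.
bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
example = "Hi I am Bhadresh. I found an issue in Deberta Tokenizer"
encoded = bert_tokenizer.encode(example)
print(bert_tokenizer.convert_ids_to_tokens(encoded))
# e.g. ['[CLS]', 'hi', 'i', 'am', 'b', '##had', '##resh', '.', ...]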

Tagging SMEs for help:

@n1t0, @LysandreJik

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments:10 (10 by maintainers)

Top GitHub Comments

cronoik commented, Feb 24, 2021 (1 reaction)

cronoik commented, Feb 21, 2021 (1 reaction)

You can convert them back with the following code:

from transformers import DebertaTokenizer
t = DebertaTokenizer.from_pretrained('microsoft/deberta-base')
example = "Hi I am Bhadresh. I found an issue in Deberta Tokenizer"

encoded_example = t.encode(example)

[t.gpt2_tokenizer.decode([t.gpt2_tokenizer.sym(id)]) if t.gpt2_tokenizer.sym(id) not in t.all_special_tokens else t.gpt2_tokenizer.sym(id) for id in encoded_example]

Output:

['[CLS]',
 'Hi',
 ' I',
 ' am',
 ' Bh',
 'ad',
 'resh',
 '.',
 ' I',
 ' found',
 ' an',
 ' issue',
 ' in',
 ' De',
 'bert',
 'a',
 ' Token',
 'izer',
 '[SEP]']
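
If this conversion is needed more than once, the one-liner above can be wrapped in a small helper. This is only a restructuring of the same workaround, and it assumes the transformers 4.3.0 implementation, in which the slow DebertaTokenizer still exposes an internal gpt2_tokenizer with sym() and decode() methods:

from transformers import DebertaTokenizer

def ids_to_readable_tokens(tokenizer, ids):
    # Same logic as the list comprehension above: map each DeBERTa id to its
    # internal GPT-2 BPE symbol, keep special tokens as-is, and decode
    # everything else back to a readable sub-word string.
    tokens = []
    for i in ids:
        sym = tokenizer.gpt2_tokenizer.sym(i)
        if sym in tokenizer.all_special_tokens:
            tokens.append(sym)
        else:
            tokens.append(tokenizer.gpt2_tokenizer.decode([sym]))
    return tokens

t = DebertaTokenizer.from_pretrained('microsoft/deberta-base')
encoded_example = t.encode("Hi I am Bhadresh. I found an issue in Deberta Tokenizer")
print(ids_to_readable_tokens(t, encoded_example))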

After some digging into the code, I am actually not sure if I should create a patch for it or not. I think with a patch we can probably also remove the method download_asset and refactor the load_vocab method.

I am not sure if this was discussed before, but when we create the required files from the bpe_encoder.bin, we could probably get rid of the GPT2Tokenizer class in tokenization_deberta.py, and DebertaTokenizer could inherit directly from GPT2Tokenizer (like RobertaTokenizer does).

I will leave it to @LysandreJik and @BigBird01 to decide what to do with it.
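
For readers unfamiliar with the pattern being referenced, here is a rough, hypothetical sketch of what such an inheritance could look like, following the way RobertaTokenizer subclasses GPT2Tokenizer. The class name and argument handling are placeholders, not the actual patch:

from transformers import GPT2Tokenizer

# Hypothetical sketch only: a DeBERTa tokenizer built directly on top of
# GPT2Tokenizer, mirroring how RobertaTokenizer reuses the byte-level BPE
# machinery. The vocab/merges files would be generated once from
# bpe_encoder.bin.
class SketchDebertaTokenizer(GPT2Tokenizer):
    def __init__(self, vocab_file, merges_file, bos_token="[CLS]",
                 eos_token="[SEP]", sep_token="[SEP]", cls_token="[CLS]",
                 unk_token="[UNK]", pad_token="[PAD]", mask_token="[MASK]",
                 **kwargs):
        # Only the special tokens differ from plain GPT-2.
        super().__init__(vocab_file=vocab_file, merges_file=merges_file,
                         bos_token=bos_token, eos_token=eos_token,
                         sep_token=sep_token, cls_token=cls_token,
                         unk_token=unk_token, pad_token=pad_token,
                         mask_token=mask_token, **kwargs)

    def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
        # [CLS] A [SEP] for a single sequence, [CLS] A [SEP] B [SEP] for a pair.
        cls, sep = [self.cls_token_id], [self.sep_token_id]
        if token_ids_1 is None:
            return cls + token_ids_0 + sep
        return cls + token_ids_0 + sep + token_ids_1 + sep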

Read more comments on GitHub >

Top Results From Across the Web

  • DeBERTa - Hugging Face
    Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the...
  • How to use the DeBERTa model by He et al. (2022) on Spyder?
    When you call encode() method it would tokenize the input then encode it to the tensors a transformer model expects, then pass it...
  • Simple Deberta V3 Large 1 epoch, 0.49 LB | Kaggle
    In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these...
  • Using Machine Learning to Enable Material Substitutions in
    questions can be derived from the results for HCI. ... to label the data without providing any quantitative value. For example, the data...
  • BERT Fine-Tuning Tutorial with PyTorch - Chris McCormick
    Revised on 3/20/20 - Switched to tokenizer.encode_plus and added validation ... NLP model that quickly gives you state of the art results.
