encode_plus is not in GPT2 Tokenizer
See original GitHub issue

It seems you removed encode_plus; what is its successor? All the notebooks include inputs = tokenizer.encode_plus(text, return_tensors='pt', add_special_tokens=True), which no longer works and raises an error.
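For context (not stated in the issue itself): in current transformers releases the recommended entry point is calling the tokenizer object directly, i.e. tokenizer(text, return_tensors='pt'), which returns the same kind of dictionary that encode_plus did. The sketch below uses a toy whitespace tokenizer, not the real transformers API, purely to illustrate the difference in return shapes between encode and encode_plus/__call__:

```python
# Toy whitespace tokenizer -- NOT the transformers API, just a sketch
# of the return shapes involved in the encode vs. encode_plus question.
class ToyTokenizer:
    def __init__(self):
        self.vocab = {}

    def _id(self, token):
        # Assign ids on first sight (hypothetical vocab-building scheme).
        return self.vocab.setdefault(token, len(self.vocab))

    def encode(self, text):
        # encode: a flat list of token ids only.
        return [self._id(t) for t in text.split()]

    def encode_plus(self, text):
        # encode_plus: a dict bundling ids with auxiliary model inputs.
        ids = self.encode(text)
        return {"input_ids": ids, "attention_mask": [1] * len(ids)}

    # Mirrors how recent transformers versions let you call the
    # tokenizer object directly as the successor entry point.
    __call__ = encode_plus

tok = ToyTokenizer()
print(tok.encode("the model encodes the text"))
# -> [0, 1, 2, 0, 3]
print(tok("the model encodes the text"))
# -> {'input_ids': [0, 1, 2, 0, 3], 'attention_mask': [1, 1, 1, 1, 1]}
```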
Issue Analytics
- State:
- Created 4 years ago
- Comments: 18 (8 by maintainers)
Top Results From Across the Web
Tokenizer - Hugging Face
Tokenizer. A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most of...
what's difference between tokenizer.encode ... - Stack Overflow
The tokenizer.encode_plus function combines multiple steps for us: ... Tokenize all of the sentences and map the tokens to their word IDs.
What is the difference between batch_encode_plus() and ...
I have read documentations related to T5 Transformer model. While using T5Tokenizer I am kind of confused with tokenizing my sentences. Can ...
Understanding the GPT-2 Source Code Part 2 - Medium
description='Pre-encode text files into tokenized training set.', ... I'm not sure why they did not use the fire library here so if anyone ......
github.com-huggingface-transformers_-_2019-09-29_08-40-52
sentence2 = "His findings were not compatible with this research."
inputs1 = tokenizer.encode_plus(sentence0, sentence1, add_special_tokens=True ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
The latest repo is ok to re-run
Thanks 👍
I see. And the imports in your version are exactly the same as in the demo notebook? In that case I would recommend adding some debug statements to neuron_view.py, to see what is being returned from the model, e.g. the final line here:
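A generic sketch of such a debug statement is below; the helper name and its placement are assumptions of mine, not taken from neuron_view.py, and it works on any Python object rather than a specific model output type:

```python
# Hypothetical debug helper (name and structure are assumptions,
# not from neuron_view.py): print enough structure to see what a
# model call actually returned.
def debug_describe(obj, label="output"):
    if hasattr(obj, "shape"):
        # Tensors/arrays: report the type and shape.
        print(f"{label}: {type(obj).__name__} shape={tuple(obj.shape)}")
    elif isinstance(obj, (list, tuple)):
        # Sequences: report length, then recurse into elements.
        print(f"{label}: {type(obj).__name__} len={len(obj)}")
        for i, item in enumerate(obj):
            debug_describe(item, f"{label}[{i}]")
    elif isinstance(obj, dict):
        # Dicts (the usual tokenizer/model output shape): recurse by key.
        for k, v in obj.items():
            debug_describe(v, f"{label}[{k!r}]")
    else:
        # Scalars and anything else: show the repr.
        print(f"{label}: {type(obj).__name__} -> {obj!r}")

debug_describe({"logits": [0.1, 0.2]})
# prints:
# output['logits']: list len=2
# output['logits'][0]: float -> 0.1
# output['logits'][1]: float -> 0.2
```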