'BertTokenizer' object has no attribute 'prepare_for_model'
See original GitHub issue

    sentence_features = self.sentence_encoder.get_sentence_features(text, longest_seq)
  File "/usr/local/lib/python3.6/dist-packages/sentence_transformers/SentenceTransformer.py", line 179, in get_sentence_features
    return self._first_module().get_sentence_features(*features)
  File "/usr/local/lib/python3.6/dist-packages/sentence_transformers/models/BERT.py", line 64, in get_sentence_features
    return self.tokenizer.prepare_for_model(tokens, max_length=pad_seq_length, pad_to_max_length=True, return_tensors='pt')
AttributeError: 'BertTokenizer' object has no attribute 'prepare_for_model'
I had been using this piece of code for many weeks and it was working fine. The error only started appearing today.
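Since the code itself never changed, the break almost certainly came from a dependency upgrade. As a minimal sketch (a hypothetical guard, not part of sentence-transformers), one could fail fast at startup when the installed transformers version is newer than the last release known to work; the `2.11.0` pin is the one reported in this thread:

```python
# Hypothetical compatibility guard: raise a clear error early instead of
# hitting an opaque AttributeError deep inside get_sentence_features().
def version_tuple(version: str) -> tuple:
    """Turn a dotted version string like '2.11.0' into a comparable int tuple."""
    return tuple(int(part) for part in version.split("."))

# Last transformers release reported to work in this thread.
KNOWN_GOOD = "2.11.0"

def is_known_good(installed: str) -> bool:
    """True when the installed version is at or below the known-good pin."""
    return version_tuple(installed) <= version_tuple(KNOWN_GOOD)
```

In practice you would compare against `transformers.__version__` at import time and raise a descriptive message telling the user to downgrade.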
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 4
- Comments: 5 (2 by maintainers)
Top Results From Across the Web
- Getting Error while adding new tokens in vocab - Beginners: AttributeError: 'BertMultiClassifier' object has no attribute 'resize_token_embeddings'.
- Train BERT model from scratch on a different language: AttributeError: 'tokenizers.Tokenizer' object has no attribute 'mask_token' ("This tokenizer does not have a mask ...").
- RoBERTa_Bert_tokenizer_train_...: Explore and run machine learning code with Kaggle Notebooks, using data from multiple data sources.
- Sentiment Analysis with BERT and Transformers by Hugging ...: An additional objective was to predict the next sentence. ... from transformers import BertModel, BertTokenizer, AdamW, ...
- BERT Fine-Tuning Tutorial with PyTorch - Chris McCormick: In this tutorial I'll show you how to use BERT with the huggingface PyTorch ... 'BertTokenizer' object has no attribute 'encode_plus'.
@Aniket-Pradhan yes, that’s what I guessed; I mainly raised the issue for the authors. People facing a similar issue can run
pip install transformers==2.11.0
after doing

pip install sentence-transformers

to downgrade to a version that works for now.

It’s because of the new version of transformers. For a quick fix, you can use the previous version of the transformers library. For the actual solution, we must wait for the authors to update the tokenizer API here.
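Put together, the workaround from this thread amounts to installing the package and then pinning transformers back:

```shell
# Install sentence-transformers first, then pin transformers to the last
# release that still exposes the tokenizer API it expects (per this thread).
pip install sentence-transformers
pip install transformers==2.11.0
```

Note this is only a stopgap until sentence-transformers is updated for the new transformers tokenizer API.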