Longest sequence and truncation of sentence
Hi, I wonder how the maximum length is set before getting an embedding for a sentence. Let s be a sentence, s = [x1, x2, x3, ..., xN]. Is there a maximum length parameter n such that, if N > n, all tokens at indices above n are removed, so that s is mapped to map(s) = [x1, x2, ..., xn]? (This is what we often see in BERT-like models.)
From this code:
longest_seq = 0
for idx in length_sorted_idx[batch_start:batch_end]:
    sentence = sentences[idx]
    tokens = self.tokenize(sentence)
    longest_seq = max(longest_seq, len(tokens))
    batch_tokens.append(tokens)

features = {}
for text in batch_tokens:
    sentence_features = self.get_sentence_features(text, longest_seq)
I am confused about what get_sentence_features does, which is defined here (I do not understand what _first_module actually corresponds to):
def get_sentence_features(self, *features):
    return self._first_module().get_sentence_features(*features)
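For context, a sentence-transformers model is built as an ordered sequence of modules (typically a Transformer module followed by a Pooling module), and _first_module() returns the first module in that sequence. The delegation can be illustrated with a minimal sketch; the class and method names below (other than _first_module and get_sentence_features) are hypothetical stand-ins, not the library's actual API:

```python
from collections import OrderedDict

class TransformerModule:
    """Stand-in for the first module, which owns tokenization and padding."""
    def get_sentence_features(self, tokens, pad_seq_length):
        # Pad token ids up to the longest sequence in the batch.
        padded = tokens + [0] * (pad_seq_length - len(tokens))
        return {"input_ids": padded}

class PoolingModule:
    """Stand-in for a later module; it has no tokenization logic."""

class MiniSentenceTransformer:
    """Hypothetical minimal model: an ordered sequence of modules."""
    def __init__(self, modules):
        self._modules = OrderedDict((str(i), m) for i, m in enumerate(modules))

    def _first_module(self):
        # Return the first module in the ordered sequence.
        return next(iter(self._modules.values()))

    def get_sentence_features(self, *features):
        # Pure delegation, mirroring the snippet above: only the first
        # (tokenizing) module knows how to build model inputs.
        return self._first_module().get_sentence_features(*features)

model = MiniSentenceTransformer([TransformerModule(), PoolingModule()])
print(model.get_sentence_features([101, 7592, 102], 5))
# -> {'input_ids': [101, 7592, 102, 0, 0]}
```

So the top-level call simply forwards the tokens (and the batch's longest sequence length, used for padding) to the Transformer module.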
Issue Analytics: Created 3 years ago · Reactions: 1 · Comments: 6 (3 by maintainers)
Hi @dataislife, BERT-like models usually have a limit of 512 tokens. In the sentence-transformers models, you can set your own limit, which is usually 128 tokens.
A sentence is broken down into tokens (word pieces). Anything above the limit (e.g. 128) is truncated, i.e., only the first 128 word pieces are used in the default setting.
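That truncation behavior can be sketched with a toy example; the whitespace tokenizer and the limit of 5 below are illustrative stand-ins (real models use word-piece tokenizers and limits like 128 or 512):

```python
MAX_SEQ_LENGTH = 5  # stand-in for the model's real limit, e.g. 128

def tokenize(sentence):
    # Hypothetical whitespace "tokenizer"; real models split into word pieces.
    return sentence.split()

def truncate(tokens, max_seq_length=MAX_SEQ_LENGTH):
    # Keep only the first max_seq_length tokens; everything after is dropped.
    return tokens[:max_seq_length]

tokens = tokenize("this sentence has more tokens than the limit allows")
print(truncate(tokens))
# -> ['this', 'sentence', 'has', 'more', 'tokens']
```

Only the leading tokens survive; nothing past the limit contributes to the embedding.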
Best Nils
@nreimers Thanks.