
Decoder emits <unk> tokens even on a closed vocabulary task

See original GitHub issue

Should we introduce an option to exclude the <unk> token from the vocabulary? This feature would be used in combination with byte-pair encoding. One would expect the decoder to quickly learn not to emit <unk> tokens, but in practice that is not guaranteed.
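Since BPE segments any input string into known subword units, the <unk> token is in principle never needed at decode time. A minimal sketch of what suppressing it could look like (the mask_unk helper and the UNK_ID index are hypothetical illustrations, not part of the project's code): mask the <unk> logit before the softmax so the decoder can never select it.

```python
import numpy as np

UNK_ID = 0  # hypothetical position of <unk> in the vocabulary

def mask_unk(logits: np.ndarray, unk_id: int = UNK_ID) -> np.ndarray:
    """Forbid <unk> at decode time by removing its probability mass.

    Setting the logit to -inf makes its softmax probability exactly 0,
    so greedy or beam-search decoding can never pick it.
    """
    masked = logits.copy()
    masked[..., unk_id] = -np.inf
    return masked

logits = np.array([2.0, 0.5, 1.0])   # <unk> (index 0) scores highest here
int(np.argmax(mask_unk(logits)))     # → 2, the best non-<unk> token
```

Masking at decode time is a hard guarantee; the issue instead asks whether the token can be dropped from the vocabulary altogether, which avoids wasting a softmax slot on it in the first place.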

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Comments: 13 (13 by maintainers)

Top GitHub Comments

1 reaction
jindrahelcl commented, Sep 5, 2016

#82 introduces the unk_sample_prob parameter of the vocabulary. It defaults to 0 when the vocabulary is loaded from a BPE merge file, to 0.5 when it is loaded from a dataset, and to whatever value was stored when it is loaded from a pickle.
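One plausible reading of this behaviour (the sample_unks helper below is a hypothetical sketch, not the project's actual implementation) is that during data loading each known token is replaced by <unk> with probability unk_sample_prob, while out-of-vocabulary tokens always become <unk>. A BPE-derived vocabulary covers every token, so its default of 0 means <unk> never appears in the training data.

```python
import random

UNK_TOKEN = "<unk>"  # hypothetical spelling of the unknown-word token

def sample_unks(tokens, vocabulary, unk_sample_prob, rng=random):
    """Map a token sequence into the vocabulary, sampling <unk>s.

    Out-of-vocabulary tokens always become <unk>; known tokens are
    additionally replaced by <unk> with probability unk_sample_prob,
    which teaches the model to cope with <unk> at test time.
    """
    out = []
    for token in tokens:
        if token not in vocabulary or rng.random() < unk_sample_prob:
            out.append(UNK_TOKEN)
        else:
            out.append(token)
    return out

vocab = {"the", "cat", "sat"}
# unk_sample_prob=0 (the BPE default): only true OOV words become <unk>
sample_unks(["the", "dog", "sat"], vocab, 0.0)  # → ["the", "<unk>", "sat"]
```

Under this reading, the 0.5 default for dataset-loaded vocabularies would deliberately expose the model to <unk> during training, since such vocabularies can actually encounter unknown words.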

0 reactions
jindrahelcl commented, Dec 13, 2016

Well, I don't know… But you can definitely set unk_sample_prob from the config file now. So yes. I would close this and open an issue for adding unit tests, which now seem to me like one of the highest priorities.


Top Results From Across the Web

What's the point to have a UNK token for out of vocabulary ...
Adding a UNK token to the vocabulary is a conventional way to handle oov words in tasks of NLP. It is totally understandable...

Auto-Complete: Pre-Process the Data II | Neurotic Networking
Convert all the other words that are not part of the closed vocabulary to the token 'unk'. Create a function that takes in...

Decoding Word Embeddings with Brain-Based Semantic ...
Some probing tasks focus on static embeddings, whereas others target the token vectors produced by contextualized embeddings.

The Application of Hidden Markov Models in Speech ...
Hidden Markov Models (HMMs) provide a simple and effective framework for modelling time-varying spectral vector sequences. As a consequence, almost all ...

Answers to Exercises - Springer Link
codewords since “start” and “stop” are so close, but there are many codes ... The decoder simply reads tokens and uses each offset...
