
Alphabet conversion from Hugging Face does not work

See original GitHub issue

Following the tutorial:

from pyctcdecode import Alphabet, BeamSearchDecoderCTC

vocab_dict = {'<pad>': 0, '<s>': 1, '</s>': 2, '<unk>': 3, '|': 4, 'E': 5, 'T': 6, 'A': 7, 'O': 8, 'N': 9, 'I': 10, 'H': 11, 'S': 12, 'R': 13, 'D': 14, 'L': 15, 'U': 16, 'M': 17, 'W': 18, 'C': 19, 'F': 20, 'G': 21, 'Y': 22, 'P': 23, 'B': 24, 'V': 25, 'K': 26, "'": 27, 'X': 28, 'J': 29, 'Q': 30, 'Z': 31}

# make alphabet
vocab_list = list(vocab_dict.keys())
# convert ctc blank character representation
vocab_list[0] = ""
# replace special characters
vocab_list[1] = "⁇"
vocab_list[2] = "⁇"
vocab_list[3] = "⁇"
# convert space character representation
vocab_list[4] = " "
# specify ctc blank char index, since conventionally it is the last entry of the logit matrix
alphabet = Alphabet.build_bpe_alphabet(vocab_list, ctc_token_idx=0)

Results in:

ValueError: Unknown BPE format for vocabulary. Supported formats are 1) ▁ for indicating a space and 2) ## for continuation of a word.

I’m trying to use a Hugging Face model with KenLM decoding, but I can’t get past this point. Thanks in advance.
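One possible reading of the error (an assumption from the message, not confirmed by the traceback alone): build_bpe_alphabet scans the vocabulary for the two BPE markers it supports, and this wav2vec2-style vocabulary is character-level, so neither marker is present. A minimal pure-Python sketch of that detection, on a truncated copy of the vocabulary above:

```python
# Sketch (assumption): reproduce the BPE-marker check described by the
# error message, using a truncated copy of the vocabulary from the question.
vocab_dict = {'<pad>': 0, '<s>': 1, '</s>': 2, '<unk>': 3, '|': 4,
              'E': 5, 'T': 6, 'A': 7}  # truncated for brevity

def has_bpe_markers(tokens):
    # The two BPE conventions named in the error message:
    # a leading "▁" (U+2581) marking a space, or "##" marking a continuation.
    return any(t.startswith("\u2581") or t.startswith("##") for t in tokens)

# Sort by index so list position matches the model's logit column.
vocab_list = [tok for tok, _ in sorted(vocab_dict.items(), key=lambda kv: kv[1])]
print(has_bpe_markers(vocab_list))  # False: a character vocab has no BPE markers
```

Since the vocabulary is character-level, the plain (non-BPE) alphabet path, rather than build_bpe_alphabet, would be the one to try.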

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 13 (9 by maintainers)

Top GitHub Comments

1 reaction
flariut commented, Jun 15, 2021

Georg, I think that was the culprit of my problem. After some tests I can now confirm the language model is being applied, and the alpha and beta settings change the results a lot, so it’s up to my particular use case to test and tune. Thanks a lot for your time, and I hope my little inconvenience at least serves to refine the tutorials 😃
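For readers tuning the same knobs: under the common shallow-fusion convention (a general formulation, not pyctcdecode's exact internals), alpha scales the language-model log-probability and beta adds a per-word insertion bonus, so a beam's combined score looks roughly like:

```python
def fused_score(log_p_acoustic, log_p_lm, word_count, alpha=0.5, beta=1.5):
    """Combined beam score under the common shallow-fusion convention:
    acoustic log-prob + alpha * LM log-prob + beta * word-insertion bonus.
    The alpha/beta defaults here are illustrative, not library defaults."""
    return log_p_acoustic + alpha * log_p_lm + beta * word_count

# Raising alpha makes the LM term dominate; raising beta favors hypotheses
# with more words, offsetting the LM's per-word penalty.
lm_heavy = fused_score(-10.0, -4.0, 3, alpha=2.0, beta=1.0)  # -10 - 8.0 + 3
acoustic = fused_score(-10.0, -4.0, 3, alpha=0.1, beta=1.0)  # -10 - 0.4 + 3
```

This is why alpha and beta have to be tuned jointly against held-out audio: either term can dominate the acoustic score if set too high.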

0 reactions
gkucsko commented, Jun 18, 2021

adding extra protection in #4


Top Results From Across the Web

  • Issue with Transformer notebook's Getting Started Tokenizers
    I am trying to learn transformers on my own so where can I go to learn if Hugging Face Doc is not up...
  • Translating using pre-trained hugging face transformers not ...
    It is a translation model that only can translate, there is no need to add the instruction "translate English to Dutch".
  • Building a Pipeline for State-of-the-Art Natural Language ...
    I'm an engineer at Hugging Face, main maintainer of tokenizes, ... that can help you work in many different steps of the NLP...
  • Create a Tokenizer and Train a Huggingface RoBERTa Model ...
    The benefit of this method is that it will start building its vocabulary from an alphabet of single chars, so all words will...
  • Export Hugging Face models to Core ML and TensorFlow Lite
    Or, the conversion may appear to succeed but the model does not work or produces incorrect outputs. The most common reasons for conversion...
