
No such file or directory while opening './4-gram.arpa.gz' KenLM

See original GitHub issue

Hi @leo19941227, could you please help with this error? Is there anything to consider while installing KenLM? I followed this guide for the installation: https://medium.com/tekraze/install-kenlm-binaries-on-ubuntu-language-model-inference-tool-33507000f33. I have checked and the file exists on my system (I placed it in the same root to make sure the problem is not with the path).

Traceback (most recent call last):
  File "run_downstream.py", line 206, in <module>
    main()
  File "run_downstream.py", line 201, in main
    runner = Runner(args, config)
  File "/home/ai-labs/Desktop/ASR/s3prl/s3prl/downstream/runner.py", line 52, in __init__
    self.downstream = self._get_downstream()
  File "/home/ai-labs/Desktop/ASR/s3prl/s3prl/downstream/runner.py", line 129, in _get_downstream
    model = Downstream(
  File "/home/ai-labs/Desktop/ASR/s3prl/s3prl/downstream/asr/expert.py", line 99, in __init__
    self.decoder = get_decoder(decoder_args, self.dictionary)
  File "/home/ai-labs/Desktop/ASR/s3prl/s3prl/downstream/asr/expert.py", line 36, in get_decoder
    return W2lKenLMDecoder(decoder_args, dictionary)
  File "/home/ai-labs/Desktop/ASR/s3prl/s3prl/downstream/asr/w2l_decoder.py", line 127, in __init__
    self.lm = KenLM('./4-gram.arpa.gz', self.word_dict)
RuntimeError: /home/ai-labs/Desktop/ASR/kenlm/util/file.cc:76 in int util::OpenReadOrThrow(const char*) threw ErrnoException because `-1 == (ret = open(name, 00))'.
No such file or directory while opening ./4-gram.arpa.gz
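The RuntimeError comes from KenLM's util::OpenReadOrThrow, which wraps the POSIX open() call and throws when it returns -1. Because './4-gram.arpa.gz' is a relative path, it is resolved against the current working directory of the process, not against the script's or the model file's directory. A minimal Python sketch of the same failure mode (the helper name here is ours, not KenLM's):

```python
import os

def open_read_or_throw(name):
    """Mimic KenLM's util::OpenReadOrThrow: open a file for reading
    and raise with the errno message when open() fails."""
    try:
        return os.open(name, os.O_RDONLY)
    except OSError as e:
        raise RuntimeError(f"{os.strerror(e.errno)} while opening {name}") from e

# A relative path like './4-gram.arpa.gz' is resolved against the
# current working directory, so the same command fails when launched
# from any other directory.
try:
    open_read_or_throw("./4-gram.arpa.gz")
except RuntimeError as e:
    print(e)  # -> No such file or directory while opening ./4-gram.arpa.gz
```

This is why the file can "exist on the system" and still not be found: existence is checked against wherever the training script happens to be launched from.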

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

2 reactions
leo19941227 commented on Oct 13, 2022

You should use --mode evaluate instead of --mode inference
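In context, the corrected invocation might look like the following (the script name comes from the traceback above; any other flags are placeholders, not taken from the issue):

```
# Fails while building the KenLM decoder:
python run_downstream.py --mode inference ...

# Use the evaluate mode instead:
python run_downstream.py --mode evaluate ...
```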

1 reaction
leo19941227 commented on Oct 13, 2022

Can you try filling in the config field with the absolute path?
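A small sketch of that suggestion: expand the configured language-model path to an absolute one before handing it to the decoder. The field name `kenlm_model` below is hypothetical; the actual key in the s3prl decoder config may differ.

```python
import os

# Hypothetical decoder config; the real field name may differ.
decoder_args = {"kenlm_model": "./4-gram.arpa.gz"}

# Expanding to an absolute path removes the dependence on the
# directory the training script happens to be launched from.
decoder_args["kenlm_model"] = os.path.abspath(
    os.path.expanduser(decoder_args["kenlm_model"])
)

print(decoder_args["kenlm_model"])  # prints an absolute path ending in 4-gram.arpa.gz
```

Equivalently, you can simply write the full path (e.g. one under your home directory) directly into the config file instead of a `./`-relative one.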

Read more comments on GitHub

Top Results From Across the Web

  • Invalid n-gram in ARPA · Issue #247 · kpu/kenlm - GitHub
    Hello, everyone. I trained a language model using this KenLM toolkit on a large corpus (>150 GB), then pruned at some threshold such as 5e-9...
  • Boosting Wav2Vec2 with n-grams in Transformers
    This blog post is a step-by-step technical guide to explain how one can create an n-gram language model and combine it with an...
  • Creating a n-gram Language Model using Wikipedia
    TLDR: This post describes how to train a n-gram Language Model of any order using Wikipedia articles. The code used is available from...
  • LibriSpeech language models - openslr.org
    LibriSpeech language models, vocabulary and G2P models. About this resource: Language modeling resources to be used in conjunction with the (soon-to-be- ...
  • subject:"\[Moses\-support\] ERROR" - The Mail Archive
    I get a similar error compiling on a WSL2 environment but I know the compile itself has succeeded: ... No such...
