Toward a stable version
I think we have fixed many issues, so we can tag a version 1.0 (or 0.1) as a stable version. Toward that, we need to finish:
- VGG2L for pytorch by @ShigekiKarita
- AN4 recipe by me
- AMI recipe
- swbd recipe
- fisher_swbd recipe
- LM integration @sw005320
- Attention/CTC joint decoding @takaaki-hori
- End detection
- Documentation by @sw005320 @kan-bayashi
- Modify L.embed to avoid the randomness @takaaki-hori
- Add WER scoring
- label smoothing by @takaaki-hori
- Replace _ilens_to_index with np.cumsum
- Refactor the main training and recognition code to be independent of the PyTorch and Chainer backends
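The _ilens_to_index replacement in the list above can be sketched in a few lines. This is an illustrative reconstruction, not ESPnet's actual helper: the assumption is that the function turns a batch's per-utterance input lengths into exclusive end indices, which is exactly what np.cumsum computes.

```python
import numpy as np

def ilens_to_index(ilens):
    """Return the exclusive end index of each utterance in a packed batch.

    Hypothetical stand-in for the old hand-rolled _ilens_to_index helper;
    the whole loop collapses to a single cumulative sum.
    """
    return np.cumsum(ilens)

# Three utterances of lengths 3, 5, and 2 end at frames 3, 8, and 10.
print(list(ilens_to_index([3, 5, 2])))  # [3, 8, 10]
```

Using np.cumsum also removes a Python-level loop, so the index computation is vectorized and deterministic across backends.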
If you have any action items, please add them in this issue. Then, we can move to more research-related implementation.
Issue Analytics
- Created 6 years ago
- Comments: 26 (24 by maintainers)
Top GitHub Comments
Guys, by combining LSTMLM and joint attention/CTC decoding, we finally get CER 5.3 -> 3.8 and WER 14.7 -> 9.3 on the WSJ task! The nice thing is that we don't have to set min/max length or the penalty (all set to 0.0), though we might need to tune the CTC and LM weights (0.3 and 1.0, respectively; see #76). @kan-bayashi, can you play with LSTMLM and joint decoding on the TEDLIUM recipe? You can train the LSTMLM on text data by referring to
tools/kaldi/egs/tedlium/s5_r2/local/ted_train_lm.sh
and simply using

The results of TEDLIUM with CTC joint decoding and LM rescoring are as follows:
- dev set: CER 12.6 -> 10.8, WER 24.8 -> 19.8
- test set: CER 11.9 -> 10.1, WER 23.4 -> 18.6
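The score combination behind the numbers above can be sketched as follows. This is a hedged illustration of how joint attention/CTC decoding with LM rescoring typically weights the per-hypothesis log-probabilities, using the weights quoted in the comment (CTC weight 0.3, LM weight 1.0); the function and argument names are mine, not ESPnet's API.

```python
def joint_score(att_logp, ctc_logp, lm_logp, ctc_weight=0.3, lm_weight=1.0):
    """Combine attention, CTC, and LM log-probabilities for one hypothesis.

    The attention and CTC scores are interpolated with ctc_weight, and the
    language-model score is added on top scaled by lm_weight.
    """
    return (1.0 - ctc_weight) * att_logp + ctc_weight * ctc_logp + lm_weight * lm_logp

# Example: 0.7 * (-1.0) + 0.3 * (-2.0) + 1.0 * (-0.5) = -1.8
print(round(joint_score(att_logp=-1.0, ctc_logp=-2.0, lm_logp=-0.5), 2))  # -1.8
```

Because the CTC term penalizes hypotheses that are inconsistent with the input length, the min/max output-length constraints and length penalty can be left at 0.0, as noted above.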