RNNT modified_adaptive_expansion_search is too slow.
Hi, I tried to use modified_adaptive_expansion_search with beam_size 5 to recognize one wav file, which took 52.9 s, while ALSD took 0.87 s. Why is it so slow?
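For context, a decoding setup along these lines is presumably what produced the two timings. This is only a minimal sketch assuming ESPnet's BeamSearchTransducer interface; the import path, the accepted search_type values, and the call signature are assumptions and may differ between ESPnet1 and ESPnet2.

```python
import time

# Assumption: BeamSearchTransducer exposes a search_type switch such as
# "maes" (modified adaptive expansion search) and "alsd"; check your ESPnet
# version for the exact module path and parameter names.
from espnet2.asr.transducer.beam_search_transducer import BeamSearchTransducer

def time_search(search_type, decoder, joint_network, enc_out, beam_size=5):
    """Run one decoding pass over a single utterance and report wall-clock time."""
    search = BeamSearchTransducer(
        decoder=decoder,
        joint_network=joint_network,
        beam_size=beam_size,
        search_type=search_type,
    )
    start = time.perf_counter()
    hyps = search(enc_out)  # enc_out: encoder output for one utterance
    print(f"{search_type}: {time.perf_counter() - start:.2f}s")
    return hyps

# time_search("maes", decoder, joint_network, enc_out)  # ~52.9 s in this report
# time_search("alsd", decoder, joint_network, enc_out)  # ~0.87 s in this report
```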
Issue Analytics
- Created a year ago
- Comments: 5
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Hi,
Can you give more information? Recipe, architecture, units, etc.?
In any case, I'm assuming you are modeling BPE or using a big vocabulary size? If so, the issue is known for ESPnet1 and is due to select_k_expansions(...), see here. I'll fix it in ESPnet1 when I have time; for the moment, please either use the changes from PR #4032 or use another time-synchronous algorithm.
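To illustrate why a large vocabulary hurts here: if the k best expansions per hypothesis are selected by building and sorting a Python list over the entire vocabulary, the cost grows with the vocabulary size on every decoding step, whereas a vectorized top-k prunes first. This is a simplified sketch of that contrast, not ESPnet's actual select_k_expansions (whose signature and extra pruning thresholds differ).

```python
import torch

def select_k_expansions_naive(hyps_logp, beam_k):
    """Illustrative slow path: pair every vocabulary entry with its score and
    sort the full list in Python for each hypothesis. With a BPE vocabulary of
    several thousand units, this sort dominates decoding time."""
    k_expansions = []
    for logp in hyps_logp:  # logp: (vocab_size,) tensor of log-probabilities
        pairs = [(k, float(logp[k])) for k in range(logp.size(0))]
        pairs.sort(key=lambda x: x[1], reverse=True)
        k_expansions.append(pairs[:beam_k])
    return k_expansions

def select_k_expansions_topk(hyps_logp, beam_k):
    """Illustrative fast path: let torch.topk prune to beam_k candidates per
    hypothesis before any Python-level work, so the per-step cost no longer
    scales with the full vocabulary size."""
    k_expansions = []
    for logp in hyps_logp:
        values, indices = torch.topk(logp, k=beam_k)
        k_expansions.append(list(zip(indices.tolist(), values.tolist())))
    return k_expansions
```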
It shouldn't make a difference in practice, but yes, it should be recombine_hyps + sorted instead of sorted + recombine_hyps (pruning should be applied at the end anyway). Will change that in #4032.
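For intuition, a minimal sketch of why recombination should come before the sort: merging hypotheses that share the same label sequence first means the beam pruning sees their combined scores rather than several duplicates. The hypothesis representation and helper names below are simplified stand-ins, not ESPnet's actual data structures.

```python
import math

def recombine_hyps(hyps):
    """Merge hypotheses with identical label sequences by log-sum-exp of their
    scores, so duplicates do not occupy several beam slots.
    Each hyp is assumed to be a (label_sequence_tuple, log_score) pair."""
    merged = {}
    for labels, score in hyps:
        if labels in merged:
            a, b = merged[labels], score
            merged[labels] = max(a, b) + math.log1p(math.exp(-abs(a - b)))
        else:
            merged[labels] = score
    return list(merged.items())

def prune(hyps, beam_size):
    """Recombine first, then sort and keep the beam_size best hypotheses,
    matching the recombine_hyps + sorted order discussed above."""
    return sorted(recombine_hyps(hyps), key=lambda h: h[1], reverse=True)[:beam_size]
```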