New RESULTS style
I think the result information should be more structured. I'm thinking of the following format (voxforge/asr1/RESULTS.md) so that we can also attach the model files corresponding to each result and put links to those files here.
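In skeleton form, each entry would carry roughly the following fields (names taken from the example below; the cmvn / recog_model / lang_model bullets are meant to become links to the uploaded model files). A rough sketch only:

```bash
# Hypothetical template for one RESULTS.md entry; the field names follow the
# example entries below.
cat << 'EOF'
# <experiment name>
- config file: conf/tuning/<config>.yaml
- system information (uname -a)
- python version
- Git hash
- cmvn / recog_model / lang_model: links to the uploaded model files
- CER/WER summary lines copied from exp/<expdir>/decode_*/result.txt
EOF
```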
Any thoughts?
Transformer 300 epochs, decoder 6 layers, 2048 units
- config file:
conf/tuning/train_pytorch_transformer_d6-2048.yaml
- system information
$ uname -a
Linux b14 4.9.0-6-amd64 #1 SMP Debian 4.9.82-1+deb9u3 (2018-03-02) x86_64 GNU/Linux
- python version
$ . ./path.sh; python --version
Python 3.7.3
- Git hash
$ git log | head -n 1 | awk '{print $2}'
5f72850ea313dc18fc0518fa4f3a95c3d8b44f09
- cmvn
- recog_model
- lang_model
- Decoding takes a very long time, so I don't recommend using this setup without speed improvements during decoding
write a CER (or TER) result in exp/tr_it_pytorch_train_d6-2048/decode_dt_it_decode/result.txt
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
| Sum/Avg | 1082 79133 | 92.5 3.8 3.7 1.9 9.4 95.0 |
write a CER (or TER) result in exp/tr_it_pytorch_train_d6-2048/decode_et_it_decode/result.txt
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
| Sum/Avg | 1055 77966 | 92.6 3.7 3.7 1.7 9.1 95.6 |
Transformer 300 epochs, decoder 1 layer, 1024 units
- config file:
conf/tuning/train_pytorch_transformer.yaml
- system information
$ uname -a
Linux b14 4.9.0-6-amd64 #1 SMP Debian 4.9.82-1+deb9u3 (2018-03-02) x86_64 GNU/Linux
- python version
$ . ./path.sh; python --version
Python 3.7.3
- Git hash
$ git log | head -n 1 | awk '{print $2}'
5f72850ea313dc18fc0518fa4f3a95c3d8b44f09
- cmvn
- recog_model
- lang_model
write a CER (or TER) result in exp/tr_it_pytorch_ep300pa10/decode_dt_it_decode/result.txt
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
| Sum/Avg | 1082 79133 | 92.0 3.9 4.1 1.8 9.8 96.2 |
write a CER (or TER) result in exp/tr_it_pytorch_ep300pa10/decode_et_it_decode/result.txt
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
| Sum/Avg | 1055 77966 | 92.1 3.9 4.0 1.7 9.6 95.7 |
Transformer 100 epochs
shinji@b14:/export/a08/shinji/201707e2e/espnet_dev6/egs/voxforge/asr2$ grep -e Avg -e SPKR -m 2 exp/tr_it_pytorch_nopatience/decode_dt_it_decode/result.txt
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
| Sum/Avg | 1082 79133 | 90.1 4.2 5.7 2.3 12.2 98.6 |
shinji@b14:/export/a08/shinji/201707e2e/espnet_dev6/egs/voxforge/asr2$ grep -e Avg -e SPKR -m 2 exp/tr_it_pytorch_nopatience/decode_et_it_decode/result.txt
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
| Sum/Avg | 1055 77966 | 89.7 4.4 6.0 2.2 12.5 99.1 |
RNN default
- several updates, including ctc/attention decoding, label smoothing, and fixed search parameters
write a CER (or TER) result in exp/tr_it_debug_alpha0.5/decode_dt_it_beam20_eacc.best_p0_len0.0-0.0_ctcw0.5/result.txt
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
| Sum/Avg | 1082 79133 | 89.6 5.5 5.0 2.5 12.9 98.2 |
write a CER (or TER) result in exp/tr_it_debug_alpha0.5/decode_et_it_beam20_eacc.best_p0_len0.0-0.0_ctcw0.5/result.txt
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
| Sum/Avg | 1055 77966 | 89.7 5.5 4.8 2.3 12.6 98.4 |
Scheduled sampling experiments with mtlalpha=0.0 and scheduled-sampling-ratio set to 0.0 and 0.5
- Number of decoder layers = 1
exp/tr_it_vggblstmp_e4_subsample1_2_2_1_1_unit320_proj320_d1_unit300_location_aconvc10_aconvf100_mtlalpha0.0_adadelta_sampratio0.0_bs30_mli800_mlo150_epochs30/decode_et_it_beam20_eacc.best_p0_len0.0-0.0_ctcw0.0/result.txt:
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
| Sum/Avg | 895 66163 | 29.4 21.5 49.2 4.2 74.8 100.0 |
exp/tr_it_vggblstmp_e4_subsample1_2_2_1_1_unit320_proj320_d1_unit300_location_aconvc10_aconvf100_mtlalpha0.0_adadelta_sampratio0.5_bs30_mli800_mlo150_epochs30/decode_et_it_beam20_eacc.best_p0_len0.0-0.0_ctcw0.0/result.txt:
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
| Sum/Avg | 895 66163 | 88.0 6.7 5.3 3.0 15.0 98.7 |
- Number of decoder layers = 2
exp/tr_it_vggblstmp_e4_subsample1_2_2_1_1_unit320_proj320_d2_unit300_location_aconvc10_aconvf100_mtlalpha0.0_adadelta_sampratio0.0_bs30_mli800_mlo150_epochs30/decode_et_it_beam20_eacc.best_p0_len0.0-0.0_ctcw0.0/result.txt:
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
| Sum/Avg | 895 66163 | 30.7 22.1 47.2 3.9 73.2 100.0 |
exp/tr_it_vggblstmp_e4_subsample1_2_2_1_1_unit320_proj320_d2_unit300_location_aconvc10_aconvf100_mtlalpha0.0_adadelta_sampratio0.5_bs30_mli800_mlo150_epochs30/decode_et_it_beam20_eacc.best_p0_len0.0-0.0_ctcw0.0/result.txt:
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
| Sum/Avg | 895 66163 | 36.4 30.3 33.4 9.1 72.8 100.0 |
several updates, including ctc/attention decoding, label smoothing, and fixed search parameters
write a CER (or TER) result in exp/tr_it_debug_alpha0.5/decode_dt_it_beam20_eacc.best_p0_len0.0-0.0_ctcw0.5/result.txt
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
| Sum/Avg | 1082 79133 | 89.6 5.5 5.0 2.5 12.9 98.2 |
write a CER (or TER) result in exp/tr_it_debug_alpha0.5/decode_et_it_beam20_eacc.best_p0_len0.0-0.0_ctcw0.5/result.txt
| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
| Sum/Avg | 1055 77966 | 89.7 5.5 4.8 2.3 12.6 98.4 |
change minlenratio from 0.0 to 0.2
exp/tr_it_d1_debug_chainer/decode_dt_it_beam20_eacc.best_p0_len0.2-0.8/result.txt:| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
exp/tr_it_d1_debug_chainer/decode_dt_it_beam20_eacc.best_p0_len0.2-0.8/result.txt:| Sum/Avg | 1082 79133 | 88.3 6.1 5.6 3.2 14.9 98.9 |
exp/tr_it_d1_debug_chainer/decode_et_it_beam20_eacc.best_p0_len0.2-0.8/result.txt:| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
exp/tr_it_d1_debug_chainer/decode_et_it_beam20_eacc.best_p0_len0.2-0.8/result.txt:| Sum/Avg | 1055 77966 | 88.4 6.0 5.6 2.9 14.5 98.9 |
change NStepLSTM to StatelessLSTM
$ grep -e Avg -e SPKR -m 2 exp/tr_it_a02/decode_*t_it_beam20_eacc.best_p0_len0.0-0.8/result.txt
exp/tr_it_a02/decode_dt_it_beam20_eacc.best_p0_len0.0-0.8/result.txt:| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
exp/tr_it_a02/decode_dt_it_beam20_eacc.best_p0_len0.0-0.8/result.txt:| Sum/Avg | 1080 78951 | 87.7 5.7 6.6 2.9 15.2 97.7 |
exp/tr_it_a02/decode_et_it_beam20_eacc.best_p0_len0.0-0.8/result.txt:| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
exp/tr_it_a02/decode_et_it_beam20_eacc.best_p0_len0.0-0.8/result.txt:| Sum/Avg | 1050 77586 | 87.3 5.8 6.9 2.8 15.5 97.5 |
VGGBLSTMP, adadelta with eps decay monitoring validation accuracy
$ grep Avg exp/tr_it_a10/decode_*t_it_beam20_eacc.best_p0_len0.0-0.8/result.txt
exp/tr_it_a10/decode_dt_it_beam20_eacc.best_p0_len0.0-0.8/result.txt:| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
exp/tr_it_a10/decode_dt_it_beam20_eacc.best_p0_len0.0-0.8/result.txt:| Sum/Avg | 1080 78951 | 86.7 5.9 7.3 3.2 16.5 98.1 |
exp/tr_it_a10/decode_et_it_beam20_eacc.best_p0_len0.0-0.8/result.txt:| SPKR | # Snt # Wrd | Corr Sub Del Ins Err S.Err |
exp/tr_it_a10/decode_et_it_beam20_eacc.best_p0_len0.0-0.8/result.txt:| Sum/Avg | 1050 77586 | 86.3 5.6 8.1 2.8 16.5 98.3 |
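For reference, the summary lines quoted in the sections above come from sclite's result.txt files; the same grep used there, given a wider glob, pulls them out of every decoding directory at once (assuming the standard exp/<expname>/decode_*/result.txt layout):

```bash
# Print the SPKR header and Sum/Avg line of every decoding result under exp/
# (same grep as above; -m 2 keeps only the first two matching lines per file)
grep -e Avg -e SPKR -m 2 exp/*/decode_*/result.txt
```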
Top GitHub Comments
Nice. Why don't you make a script to show the above information automatically? e.g.
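A minimal sketch of such a script, assuming it is run from the recipe directory (e.g. egs/voxforge/asr1) with the experiment directory as an argument; the script name and output format are illustrative only, not an existing ESPnet tool:

```bash
#!/usr/bin/env bash
# show_result.sh (hypothetical) -- print the metadata and CER/TER summary
# blocks of one experiment in the proposed RESULTS.md style.
# Usage: ./show_result.sh exp/tr_it_pytorch_train_d6-2048
set -e

expdir=$1

echo "- system information"
uname -a

echo "- python version"
. ./path.sh; python --version

echo "- Git hash"
git log | head -n 1 | awk '{print $2}'

# summary (SPKR header and Sum/Avg) lines of every decoding run in this experiment
for rt in "${expdir}"/decode_*/result.txt; do
    echo "write a CER (or TER) result in ${rt}"
    grep -e Avg -e SPKR -m 2 "${rt}"
done
```

Running it once per experiment directory and pasting the output under the corresponding heading would reproduce entries like the ones above.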
This issue is closed. Please re-open if needed.