Try Monotonic Attention
I tried monotonic attention and got a better result: the alignment is clear at step 10k.
- update to TensorFlow 1.3
- modify models/tacotron.py: replace `BahdanauAttention(256, encoder_outputs)` with `BahdanauMonotonicAttention(256, encoder_outputs)` (see the sketch after this list)
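A minimal sketch of where that swap lands, assuming the tf.contrib.seq2seq API from TensorFlow 1.3 (the release that added the monotonic attention classes); the placeholder encoder output tensor and GRU decoder cell below are illustrative stand-ins, not the repo's actual definitions:

```python
import tensorflow as tf

# Illustrative stand-ins; in models/tacotron.py these come from the
# actual encoder and decoder definitions.
encoder_outputs = tf.placeholder(tf.float32, [None, None, 256])  # [batch, time, depth]
decoder_cell = tf.contrib.rnn.GRUCell(256)

# Before: plain soft content-based attention.
# attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(256, encoder_outputs)

# After: the monotonic variant, which constrains the alignment to only
# move forward over the encoder states.
attention_mechanism = tf.contrib.seq2seq.BahdanauMonotonicAttention(256, encoder_outputs)

attention_cell = tf.contrib.seq2seq.AttentionWrapper(
    decoder_cell,
    attention_mechanism,
    alignment_history=True)  # keep alignments so they can be plotted
```

Since the constructor signature matches BahdanauAttention, the rest of the decoder wiring can stay unchanged.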
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
- results after 125k (this repo): step-550k.zip, step-883k.zip
- multi-speaker results (my implementation): read.zip
Yuxuan, the first author of Tacotron, said that they also use monotonic attention in their newest version. They showed paragraph synthesis (more than 400 chars).