Add fast wavenet generation.
See original GitHub issue

@ibab I noticed you saw the efficient wavenet generation implementation I wrote with my friends:
https://github.com/tomlepaine/fast-wavenet

Can we help you add it to tensorflow-wavenet?
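For context, the core idea behind fast-wavenet is to cache each dilated layer's past activations in per-layer queues, so producing one new sample touches each layer only once instead of re-running the whole receptive field. The sketch below is a minimal NumPy illustration of that caching scheme, not code from the repository; the layer weights, the simple tanh gate, and the helper names are placeholder assumptions.

```python
import numpy as np
from collections import deque

def init_queues(dilations, channels):
    """One FIFO per dilated layer, pre-filled with zeros ("silence").
    A value popped now is the state pushed `dilation` steps earlier."""
    return [deque(np.zeros((d, channels)), maxlen=d) for d in dilations]

def generate_step(x, queues, weights):
    """Advance generation by one sample.

    Each layer pops its cached activation from `dilation` steps ago,
    combines it with the current input (the two taps of a width-2
    dilated convolution), then caches the current input for reuse.
    """
    current = x
    for q, (w_past, w_cur) in zip(queues, weights):
        past = q.popleft()                               # state from d steps ago
        new = np.tanh(past @ w_past + current @ w_cur)   # simplified gate
        q.append(current)                                # cache for d steps later
        current = new
    return current

# Toy usage: three dilated layers with dilations 1, 2, 4.
dilations = [1, 2, 4]
channels = 8
rng = np.random.default_rng(0)
weights = [(rng.standard_normal((channels, channels)) * 0.1,
            rng.standard_normal((channels, channels)) * 0.1)
           for _ in dilations]
queues = init_queues(dilations, channels)
sample = np.zeros(channels)
for _ in range(5):
    sample = generate_step(sample, queues, weights)
```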
Issue Analytics
- Created 7 years ago
- Comments:7 (7 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Update: Just about finished with the port. Will clean up the code and run some speed tests tonight, then make a pull request.
Sorry for the delay!
It seems the way we implemented our queue had a bug that made the states we popped incorrect a small percentage of the time. We aren't sure of the exact cause, but we think it is related to tf.scatter_update. We switched from our queue to tf.FIFOQueue and that seemed to fix the bug (and slightly improved our speed 🎉). Now (based on our tests) the fast generation exactly reproduces the outputs and states of the slow generation.

We will make a pull request after lunch.
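For illustration, here is a minimal sketch of what a tf.FIFOQueue-based per-layer state cache can look like in the TF 1.x graph API. This is not the code from the pull request; the shapes, variable names, and the width-2 gate are placeholder assumptions.

```python
import tensorflow as tf  # written against the TF 1.x graph API

batch_size, channels, dilation = 1, 32, 4

# One queue per dilated layer; capacity matches the dilation so a value
# dequeued now is the state that was enqueued `dilation` steps earlier.
q = tf.FIFOQueue(capacity=dilation,
                 dtypes=tf.float32,
                 shapes=[(batch_size, channels)])

# Pre-fill with zeros so the first few steps see "silence" as their past.
init_q = q.enqueue_many(tf.zeros([dilation, batch_size, channels]))

current = tf.placeholder(tf.float32, [batch_size, channels])
past = q.dequeue()          # state from `dilation` steps ago
push = q.enqueue(current)   # cache the current state for later reuse

with tf.control_dependencies([push]):
    # At generation time the width-2 dilated conv collapses to two matmuls.
    w_past = tf.get_variable('w_past', [channels, channels])
    w_cur = tf.get_variable('w_cur', [channels, channels])
    out = tf.tanh(tf.matmul(past, w_past) + tf.matmul(current, w_cur))
```

In a setup like this, init_q would be run once per generated sequence and out evaluated once per sample, feeding each layer's previous output in as current.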