Variable rate phase vocoding
See original GitHub issue

`phase_vocoder` currently accepts a single scalar parameter controlling the rate of time dilation. In principle, it shouldn’t be difficult to either vectorize this, or support a slice+rate parameterization so that different time regions can be accelerated by different rates.
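For context, the scalar-rate algorithm being discussed can be sketched as follows. This is a condensed, illustrative implementation of the standard phase-vocoder loop over a complex STFT matrix, not librosa's exact code; the function name and signature here are assumptions for the example.

```python
import numpy as np

def phase_vocoder(D, rate, hop_length):
    """Time-stretch a complex STFT matrix D (n_bins, n_frames) by a
    single scalar `rate` (> 1 speeds up, < 1 slows down).
    Illustrative sketch of the standard algorithm."""
    n_bins, n_frames = D.shape
    # fractional frame positions to read from the input
    time_steps = np.arange(0, n_frames, rate)
    # expected phase advance per hop for each frequency bin
    phi_advance = np.linspace(0, np.pi * hop_length, n_bins)
    # pad so frames i and i+1 are always readable
    D = np.pad(D, [(0, 0), (0, 2)], mode="constant")
    out = np.zeros((n_bins, len(time_steps)), dtype=complex)
    phase_acc = np.angle(D[:, 0])
    for t, step in enumerate(time_steps):
        i = int(step)
        frac = step - i
        # linearly interpolate magnitude between neighboring frames
        mag = (1 - frac) * np.abs(D[:, i]) + frac * np.abs(D[:, i + 1])
        out[:, t] = mag * np.exp(1j * phase_acc)
        # phase deviation from the expected advance, wrapped to [-pi, pi]
        dphase = np.angle(D[:, i + 1]) - np.angle(D[:, i]) - phi_advance
        dphase -= 2 * np.pi * np.round(dphase / (2 * np.pi))
        phase_acc = phase_acc + phi_advance + dphase
    return out
```

The single scalar `rate` fixes one global slope between input and output time; the proposals below are about letting that slope vary over the signal.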
Issue Analytics
- State:
- Created 7 years ago
- Comments: 7 (7 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Once you get to variable rates, you have to make sure you are clear about how the time base is defined, because now the input and output time bases have a nonlinear relationship. What you really want is the mapping from output timebase to input timebase, since it’s not infeasible to want reversals and loops (a non-monotonic relationship). If you accept a vector of per-input-window time scales, then effectively you’re specifying how much time each fixed-size input window will occupy in the output, and it’s hard to predict how long the output will be.
I think accepting a vector of (input time, output time) (defining a piecewise-linear relationship) tuples is a nice, neutral description. You could then programmatically figure out the slope (time scaling) and time center required from the input signal for each output frame.
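The (input time, output time) anchor scheme described above can be sketched with a short numpy helper. This is a hypothetical function illustrating the idea, not an existing librosa API; the anchors must be sorted by output time, but the input times may reverse, which is what allows loops and reversals.

```python
import numpy as np

def output_to_input_map(anchors, n_out_frames, hop_s):
    """For each output frame, compute the input time to read from and
    the local time-scale factor, given a piecewise-linear relationship
    defined by (input_time, output_time) anchor pairs.
    Hypothetical helper sketching the proposal, not a librosa API."""
    anchors = np.asarray(anchors, dtype=float)
    t_in, t_out = anchors[:, 0], anchors[:, 1]
    # timestamps of the output frames on the output timebase
    out_times = np.arange(n_out_frames) * hop_s
    # piecewise-linear map from output time to input time; t_in need
    # not be monotone, so reversals and loops are expressible
    in_times = np.interp(out_times, t_out, t_in)
    # local slope d(input)/d(output): the per-frame time-scale factor
    rates = np.gradient(in_times, out_times)
    return in_times, rates
```

For example, anchors `[(0, 0), (1, 2), (2, 3)]` stretch the first input second over two output seconds (rate 0.5) and play the second input second at normal speed; the output length is fixed up front by the last output anchor, which avoids the unpredictable-length problem of the per-input-window vector.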
On Sun, Feb 19, 2017 at 10:35 AM, Brian McFee notifications@github.com wrote:
🛎️