ENH overlap-add convolutions
I think this is standard practice for 1D signals, since they often have to deal with signals that are very long relative to the filter size, but it could be applied across the board: imagine a very large 2D image or, more realistically, a 3D array that is actually a video of 2D frames which you want to analyze with spatiotemporal Gabors. That would require overlap-add along the time axis.
So the general setup might be one where you specify the axes you want overlap-add in, and a WaveletTransform or OverlapAddConvolution module takes care of it.
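A minimal sketch of the idea, leaning on SciPy's `oaconvolve` rather than anything in this library (the shapes and the random "Gabor" below are made up for illustration):

```python
# Hedged sketch (SciPy, not this project's API) of overlap-add convolution of a
# "video" with a spatiotemporal filter; all shapes below are invented.
import numpy as np
from scipy.signal import oaconvolve

video = np.random.randn(3_000, 64, 64)   # (time, height, width)
gabor = np.random.randn(121, 11, 11)     # spatiotemporal Gabor-like kernel

# oaconvolve breaks long axes into blocks internally, so the whole video never
# goes through a single huge FFT; `axes` selects which axes are convolved.
response = oaconvolve(video, gabor, mode='same', axes=(0, 1, 2))
```

A dedicated module could expose finer control than this, e.g. chunking only along the time axis while handling the spatial axes directly.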
Issue Analytics
- Created: 5 years ago
- Comments: 6
Top Results From Across the Web
- Overlap–add method (Wikipedia): In signal processing, the overlap–add method is an efficient way to evaluate the discrete convolution of a very long signal x[n] …
- 12.2: Fast Convolution by Overlap-Add and Overlap-Save: An alternative approach to the overlap-add can be developed by starting with segmenting the output rather than the input …
- The Overlap-Add Method and FFT Convolution (EE Times): FFT convolution uses the overlap-add method together with the Fast Fourier Transform, allowing signals to be convolved by multiplying their frequency spectra.
- FFT convolution with overlap add (comp.dsp, DSPRelated.com): The method described breaks a signal with a large number of points into small segments, which are then convolved …
- Implementation of conv() Uses Overlap and Save While Direct …: The current implementation uses in many cases the overlap-and-save method for linear convolution. While this method is optimized …
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
The other potential advantage here, from what I understand, is that it may give us a way to compute filters independent of image size, since we’re storing them in the spatial/time domain.
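To make that concrete with an assumed 1D setup (again SciPy, not this library's API): because the filter lives in the signal/time domain, the same small kernel can be applied to inputs of any length, rather than having to be rebuilt to match each transform size.

```python
# Hedged illustration with made-up sizes: a time-domain kernel does not depend
# on the input's length, so one filter serves inputs of any size.
import numpy as np
from scipy.signal import oaconvolve

kernel = np.hanning(257)                                    # small time-domain filter
short = oaconvolve(np.random.randn(1_000), kernel, mode='same')
long = oaconvolve(np.random.randn(1_000_000), kernel, mode='same')
```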
Working with 1D data of shape (384, 300000) is showing me the limitations of not having this implemented. My GPU maxes out in memory at (30, 300000), which, if we made this into one long line, would correspond to roughly 400 s of mono audio at 22050 Hz. It would be nice to be able to go a bit above that without having to manually chunk data, etc.
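Until something like this lands, the manual chunking alluded to above might look roughly like the following sketch (PyTorch; the function name, default block size, and example filter are invented for illustration):

```python
# Rough sketch (not this project's code) of chunked overlap-add convolution
# along the last axis; `block_size` is a tunable assumption.
import torch

def overlap_add_conv1d(x, h, block_size=65536):
    """'Full' convolution of x (..., N) with a 1D filter h (M,), block by block."""
    n, m = x.shape[-1], h.shape[-1]
    fft_len = block_size + m - 1
    H = torch.fft.rfft(h, n=fft_len)                      # filter transformed once
    out = x.new_zeros(*x.shape[:-1], n + m - 1)
    for start in range(0, n, block_size):
        block = x[..., start:start + block_size]          # may be short at the end
        seg = torch.fft.irfft(torch.fft.rfft(block, n=fft_len) * H, n=fft_len)
        stop = min(start + fft_len, n + m - 1)
        out[..., start:stop] += seg[..., :stop - start]   # add the overlapping tails
    return out

# e.g.: x = torch.randn(30, 300_000, device='cuda')
#       y = overlap_add_conv1d(x, torch.hann_window(1024, device='cuda'))
```

Only one block of length `block_size + m - 1` has to be transformed on the GPU at a time, rather than the full 300000-sample axis per row.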