FFT backend for Filter2d
🚀 Feature
An FFT backend for kornia.filters.filter2d.
Motivation
kornia.filters.filter2d is very slow for large kernels, as it currently only performs convolutions in the spatial domain with conv2d.
Pitch
Convolution in the spatial domain is equivalent to multiplication in the Fourier domain. For large kernels, transforming both the kernel and the image to the Fourier domain, multiplying, and inverting the FFT should yield a much faster filter2d implementation (in my experience, filters larger than 10x10 pixels are likely to benefit from a Fourier-domain approach).
Alternatives
Additional context
Issue Analytics
- State:
- Created a year ago
- Comments: 5 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@oliland that would be an awesome contribution. I guess we are targeting something like this: https://stackoverflow.com/questions/60561933/verify-convolution-theorem-using-pytorch
A benchmark should be included. I also noticed some slowdowns when using large kernels in Gaussian blur and similar filters.
Yes, but for other backends it would be even more complicated; we would need many ifs per backend and per kernel size. Also, I think the CPU conv is done via im2col: https://discuss.pytorch.org/t/how-was-conv2d-implemented-in-pytorch/35223
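A rough timing comparison along the lines the maintainers suggest might look like the sketch below. It is not a rigorous benchmark: the image/kernel sizes are arbitrary, the FFT path times only the transform-multiply-inverse steps (not kernel padding or border handling), and results will vary by hardware and by whether MKL/cuFFT is in use.

```python
import time
import torch
import torch.nn.functional as F

def time_once(fn):
    """Time a single call; a real benchmark would warm up and average."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

img = torch.randn(1, 1, 512, 512)
ker = torch.randn(1, 1, 31, 31)

# Spatial convolution with 'same' padding.
t_spatial = time_once(lambda: F.conv2d(img, ker, padding=15))

# FFT path: pad both to the full linear-convolution size (512 + 31 - 1),
# multiply spectra, invert.
n = 512 + 31 - 1
t_fft = time_once(lambda: torch.fft.irfft2(
    torch.fft.rfft2(img, s=(n, n)) * torch.fft.rfft2(ker, s=(n, n)),
    s=(n, n),
))

print(f"spatial: {t_spatial:.4f}s  fft: {t_fft:.4f}s")
```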