[Feat] Discrete Gaussian kernel (gaussian) should be adapted for discrete filtering
🐛 Bug
The current Gaussian kernel simply samples the continuous Gaussian function:
def gaussian(window_size, sigma):
    # Sample the continuous Gaussian at integer positions centred on the window
    x = torch.arange(window_size).float() - window_size // 2
    if window_size % 2 == 0:
        # Shift by half a pixel so even-sized windows stay centred
        x = x + 0.5
    gauss = torch.exp(-x.pow(2.0) / float(2 * sigma ** 2))
    return gauss / gauss.sum()
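One way to quantify the mismatch: for the discrete Gaussian the kernel variance equals the scale t = sigma² exactly (on an untruncated support), whereas the sampled-and-normalised kernel drifts for small sigma. A minimal check (a sketch, reusing the gaussian() above):
import torch

for sigma in (0.5, 1.0, 2.0):
    window_size = 7
    k = gaussian(window_size, sigma)  # sampled kernel from the snippet above
    x = torch.arange(window_size).float() - window_size // 2
    # Second moment of the normalised kernel; ideally equal to sigma ** 2
    print(sigma ** 2, (k * x.pow(2)).sum().item())
For sigma = 0.5 this prints a variance of roughly 0.215 instead of 0.25.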
Expected behavior
Discrete Gaussian filtering should be compatible with scale-space theory and derive its kernel weights from the modified Bessel functions of the first kind: T(n, t) = exp(-t) * I_n(t), with scale t = sigma². See e.g.: https://en.wikipedia.org/wiki/Scale_space_implementation#The_discrete_Gaussian_kernel
Additional context
Note that this is a similar issue to that raised in MONAI: https://github.com/Project-MONAI/MONAI/issues/363
I came here while looking for a PyTorch implementation of a Discrete Gaussian filter but realised this one was also sampling the continuous Gaussian function.
ITK implements such a behaviour in itk::GaussianOperator: https://github.com/InsightSoftwareConsortium/ITK/blob/master/Modules/Core/Common/include/itkGaussianOperator.hxx
Wikipedia also provides a basic Python implementation (albeit relying on SciPy and without normalisation):
from scipy.special import iv
import numpy as np

def discrete_gaussian_kernel(n, t):
    # T(n, t) = exp(-t) * I_n(t)
    T = np.exp(-t) * iv(n, t)
    return T
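For completeness, a normalised PyTorch variant is straightforward. This is a sketch rather than a proposal for the final Kornia API; the function name is illustrative, SciPy is assumed available, and an odd window size is assumed:
from scipy.special import ive
import numpy as np
import torch

def discrete_gaussian_kernel_1d(window_size: int, sigma: float) -> torch.Tensor:
    # Scale parameter of the discrete Gaussian: t = sigma ** 2
    t = float(sigma) ** 2
    # Integer offsets centred on the window (odd window_size assumed)
    n = np.arange(window_size) - window_size // 2
    # T(n, t) = exp(-t) * I_n(t); ive(n, t) = iv(n, t) * exp(-t) is the
    # exponentially scaled form, which avoids overflow of iv for large t
    weights = ive(n, t)
    kernel = torch.from_numpy(weights).float()
    # Renormalise so the truncated window sums to one
    return kernel / kernel.sum()
For example, discrete_gaussian_kernel_1d(7, 1.5) returns seven weights summing to one.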
Top GitHub Comments
I will check the actual difference. If we go ahead with the change, we would also need to modify our spatial_gradient kernels. It should also help with the localization precision of local features.
@wyli that sounds good, we could target including this feature in v0.5.