Question: How to use even-sized kernels with the MONAI Convolution block?
Hi All, I am rewriting MC_GAN with the MONAI framework and have a question about the MONAI Convolution block: why are even-sized kernels not supported?
I run into this problem whenever my kernel size is an even number: the `monai.networks.layers.convutils.same_padding()` function, called from `Convolution().__init__`, raises a `NotImplementedError`:
# from monai.networks.layers.convutils.same_padding()
if np.any((kernel_size_np - 1) * dilation % 2 == 1):
    raise NotImplementedError(
        f"Same padding not available for kernel_size={kernel_size_np} and dilation={dilation_np}."
    )
padding_np = (kernel_size_np - 1) / 2 * dilation_np
Is there an implementation planned for this in the future? Why is there no manual padding option in the Convolution block constructor?
I did some value testing: `padding_np` is an integer when `kernel_size` is odd, but ends in .5 when `kernel_size` is even.
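A quick check reproduces this (a hypothetical snippet, assuming MONAI is installed; not from the original issue):

from monai.networks.layers.convutils import same_padding

print(same_padding(3))              # (3 - 1) / 2 = 1.0      -> padding 1
print(same_padding(5, dilation=2))  # (5 - 1) / 2 * 2 = 4.0  -> padding 4
same_padding(4)                     # (4 - 1) / 2 = 1.5      -> raises NotImplementedError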
For reference, these are the `nn.Sequential()` blocks:
# Original PyTorch source I am replacing:
torch.nn.Conv2d(in_chan, out_chan, kernel_size=4, stride=2, padding=1, bias=False)
torch.nn.BatchNorm2d(out_chan)
torch.nn.LeakyReLU(0.2, inplace=True)
# What I want to use: Monai Convolution Block
Convolution(2, in_chan, out_chan, kernel_size=4, strides=2, bias=False, act=self.Act, norm=self.Norm)
# What I am using: Monai LayerFactory
Conv["conv", 2](in_chan, out_chan, 4, 2, 1, bias=False),
Norm["batch", 2](out_chan),
Act["leakyrelu"](0.2, inplace=True),
Thank you for any help.
Issue Analytics
- Created: 3 years ago
- Reactions: 1
- Comments: 9 (9 by maintainers)
Top GitHub Comments
The problem though is that the padding for `Conv2d` isn't sufficient to produce an output with the same shape given an even-sized kernel and a stride of 1. Choosing to pad the input beforehand as well as pad in `Conv2d` can do this, but when different stride or dilation values are used it becomes quite difficult to calculate what that pad should be. For example, with a stride of 2 the expected output would be halved in every spatial dimension regardless of kernel size or dilation value; figuring out how to pad the input to achieve this isn't clear to me.
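To make the arithmetic concrete: `Conv2d` produces outputs of size `floor((i + 2p - d*(k-1) - 1)/s) + 1`, so with stride 1 a same-sized output needs total padding `d*(k-1)`, which is odd when `k` is even and `d` is odd and therefore cannot be split evenly between the two sides. An asymmetric `F.pad` can supply the odd total. A minimal sketch (not from the thread; shapes and channel counts are assumed):

import torch
import torch.nn.functional as F
from torch import nn

x = torch.randn(1, 3, 64, 64)

# Stride 1, k=4: total padding k - 1 = 3 per spatial dim, split asymmetrically 1/2.
# F.pad takes (left, right, top, bottom) for 2D inputs.
conv = nn.Conv2d(3, 8, kernel_size=4, stride=1, padding=0)
print(conv(F.pad(x, (1, 2, 1, 2))).shape)  # torch.Size([1, 8, 64, 64]) -- same spatial size

# Stride 2: the usual DCGAN choice k=4, p=1 halves each spatial dim exactly.
down = nn.Conv2d(3, 8, kernel_size=4, stride=2, padding=1)
print(down(x).shape)  # torch.Size([1, 8, 32, 32])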
Thank you, this will be helpful. The initial conv in the MC_GAN G/D does not use normalization but does use activation; subsequent blocks use both.
I think this issue can be closed now.
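A closing note for later readers: newer MONAI releases appear to expose an explicit `padding` argument on `Convolution` itself (an assumption worth verifying against your installed version), which would bypass `same_padding()` entirely:

from monai.networks.blocks import Convolution

# Assumes a MONAI version whose Convolution constructor accepts `padding`;
# older versions (as in this issue) derive padding via same_padding() only.
block = Convolution(
    spatial_dims=2,
    in_channels=3,
    out_channels=64,
    kernel_size=4,
    strides=2,
    padding=1,
    bias=False,
    act=("leakyrelu", {"negative_slope": 0.2, "inplace": True}),
    norm="batch",
)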