
Question: How to use even-sized kernels with the MONAI Convolution block?


Hi all, I am working on rewriting MC_GAN with the MONAI framework and have a question about the MONAI Convolution block: why are even-sized kernels not supported?

I run into this problem whenever my kernel size is an even number. The monai.networks.layers.convutils.same_padding() function, called by Convolution().__init__, throws a NotImplementedError:

# from monai.networks.layers.convutils.same_padding()
# (kernel_size_np and dilation_np are np.atleast_1d() arrays built from the kernel_size and dilation arguments)

    if np.any((kernel_size_np - 1) * dilation % 2 == 1):
        raise NotImplementedError(
            f"Same padding not available for kernel_size={kernel_size_np} and dilation={dilation_np}."
        )

    padding_np = (kernel_size_np - 1) / 2 * dilation_np

Is there an implementation planned for this in the future? Why is there no manual padding option in the Convolution block constructor?

I did some value testing: padding_np comes out as an integer when kernel_size is odd, and ends in .5 when kernel_size is even.
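For example, calling same_padding() directly shows this behaviour (a quick sketch, assuming the same_padding(kernel_size, dilation=1) signature implied by the excerpt above):

    from monai.networks.layers.convutils import same_padding

    print(same_padding(3))  # 1  -> (3 - 1) / 2 * 1 = 1.0
    print(same_padding(5))  # 2  -> (5 - 1) / 2 * 1 = 2.0
    same_padding(4)         # raises NotImplementedError, since (4 - 1) / 2 * 1 = 1.5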

For reference, these are the nn.Sequential() blocks:

# Original PyTorch source I am replacing:
            torch.nn.Conv2d(in_chan, out_chan, kernel_size=4, stride=2, padding=1, bias=False),
            torch.nn.BatchNorm2d(out_chan),
            torch.nn.LeakyReLU(0.2, inplace=True),
# What I want to use: MONAI Convolution block
            Convolution(2, in_chan, out_chan, kernel_size=4, strides=1, bias=False, act=self.Act, norm=self.Norm),
# What I am using: MONAI LayerFactory
            Conv["conv", 2](in_chan, out_chan, 4, 2, 1, bias=False),
            Norm["batch", 2](out_chan),
            Act["leakyrelu"](0.2, inplace=True),

Thank you for any help.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 1
  • Comments: 9 (9 by maintainers)

Top GitHub Comments

2 reactions
ericspod commented, Aug 24, 2020

The problem, though, is that the padding for Conv2d alone isn't sufficient to produce an output with the same shape given an even-sized kernel and a stride of 1:

import torch
import torch.nn as nn

t = torch.rand(1, 1, 64, 64)

c = nn.Conv2d(1, 1, 4, 1, padding=(0, 0))
print(c(t).shape)  # torch.Size([1, 1, 61, 61])

c = nn.Conv2d(1, 1, 4, 1, padding=(1, 1))
print(c(t).shape)  # torch.Size([1, 1, 63, 63])

c = nn.Conv2d(1, 1, 4, 1, padding=(2, 2))
print(c(t).shape)  # torch.Size([1, 1, 65, 65])

Choosing to pad the input beforehand as well as pad in Conv2d can do this, but when different stride or dilation values are used it becomes quite difficult to calculate what that pad should be. For example, with a stride of 2 the expected output would be halved in every spatial dimension regardless of kernel size or dilation value; figuring out how to pad the input to achieve this isn't clear to me.
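To make the stride-1 case concrete, here is a minimal sketch of that manual-padding idea, assuming the total pad of kernel_size - 1 is split asymmetrically with F.pad before a padding-free convolution:

import torch
import torch.nn as nn
import torch.nn.functional as F

t = torch.rand(1, 1, 64, 64)

# pad the input by kernel_size - 1 = 3 in total, split asymmetrically as (1, 2) per dimension,
# then convolve with padding=0 so the stride-1 output keeps the input's spatial shape
t_padded = F.pad(t, (1, 2, 1, 2))  # (W_left, W_right, H_top, H_bottom)
c = nn.Conv2d(1, 1, kernel_size=4, stride=1, padding=0)
print(c(t_padded).shape)  # torch.Size([1, 1, 64, 64])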

1 reaction
gagandaroach commented, Aug 26, 2020

Thank you, this will be helpful.

The initial conv block in the MC GAN generator/discriminator (G/D) does not use normalization but does use activation; subsequent blocks use both.
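As a hedged sketch, those two block variants could look like this with the LayerFactory style from the question (the first_block/later_block names and channel counts are illustrative):

import torch.nn as nn
from monai.networks.layers.factories import Act, Conv, Norm

in_chan, out_chan = 3, 64  # illustrative channel counts

# initial block: convolution + activation, no normalization
first_block = nn.Sequential(
    Conv["conv", 2](in_chan, out_chan, 4, 2, 1, bias=False),
    Act["leakyrelu"](0.2, inplace=True),
)

# subsequent blocks: convolution + normalization + activation
later_block = nn.Sequential(
    Conv["conv", 2](out_chan, out_chan * 2, 4, 2, 1, bias=False),
    Norm["batch", 2](out_chan * 2),
    Act["leakyrelu"](0.2, inplace=True),
)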

I think this issue can be closed now.


