BNInception architecture
See original GitHub issue.

There seems to be a mistake in the BNInception architecture after the 29th Oct commit. I try to use its convolutional part as a pretrained model for transfer learning and get this during the forward pass:
RuntimeError: given groups=1, weight of size [64, 192, 1, 1], expected input[1, 64, 8, 8] to have 192 channels, but got 64 channels instead
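The error means the 1x1 convolution's weight was built for 192 input channels, but the tensor reaching it has only 64. A minimal, framework-free sketch of the shape check PyTorch performs (the function name is hypothetical; only the shapes come from the traceback):

```python
def check_conv_input(weight_shape, input_shape, groups=1):
    """Mimic the Conv2d channel check that produces the error above.

    weight_shape: (out_channels, in_channels // groups, kH, kW)
    input_shape:  (N, C, H, W)
    """
    expected = weight_shape[1] * groups
    got = input_shape[1]
    if got != expected:
        raise RuntimeError(
            f"given groups={groups}, weight of size {list(weight_shape)}, "
            f"expected input{list(input_shape)} to have {expected} channels, "
            f"but got {got} channels instead"
        )

# The failing call from the traceback: weight [64, 192, 1, 1] vs. input [1, 64, 8, 8]
try:
    check_conv_input((64, 192, 1, 1), (1, 64, 8, 8))
except RuntimeError as e:
    print(e)
```

So the bug is not in the convolution itself but in which tensor is fed to it: somewhere upstream, a 64-channel feature map is routed into a layer declared with 192 input channels.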
Issue Analytics
- State:
- Created 5 years ago
- Comments:5 (1 by maintainers)
Top Results From Across the Web
A Simple Guide to the Versions of the Inception Network
Using the dimension reduced inception module, a neural network architecture was built. This was popularly known as GoogLeNet (Inception v1).
Inception Network | Implementation Of GoogleNet In Keras
The paper proposes a new type of architecture – GoogLeNet or Inception v1. It is basically a convolutional neural network (CNN) which is...

Rethinking the Inception Architecture for Computer Vision - arXiv
Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably...

Short history of the Inception deep learning architecture
This paper introduces the Inception v1 architecture, implemented in the winning ILSVRC 2014 submission GoogLeNet. The main contribution with respect to Network ...

Understanding Architecture Of Inception Network & Applying It ...
Inception networks were created with the idea of increasing the capability of a deep neural network while efficiently using computational resources. I quote...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@Cadene He maybe uses fastai…
Please ignore the layers above; here is the error:

self.inception_3a_3x3_bn = nn.BatchNorm2d(64, affine=True)
self.inception_3a_relu_3x3 = nn.ReLU(inplace=True)
self.inception_3a_double_3x3_reduce = nn.Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1))
The batch norm layer outputs only 64 channels, but the reduce layer expects 192 input channels.