Activation selection within the bottlenecks in the network
See original GitHub issue

```python
if relu:
    activation = nn.ReLU()
else:
    activation = nn.PReLU()
```
Does doing this ensure that the PReLU weights are unique for each instance of `activation` within the bottlenecks? When tracing this network with `torch.jit`, it raises errors about weights shared by `nn.PReLU` layers across the submodules. Perhaps this should be implemented with `copy.deepcopy` for all instances?
To follow the original paper more closely, the number of channels can be passed to each PReLU instance (via `num_parameters`) so it learns one weight per channel, as shown here.
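The sharing problem described above can be sketched without PyTorch at all: assigning one constructed module to a variable and reusing it in several bottlenecks means every use points at the same parameter object, whereas `copy.deepcopy` (or constructing a fresh instance per bottleneck) gives independent weights. The `FakePReLU` class below is a hypothetical stand-in for `nn.PReLU`, used only to illustrate the object-sharing semantics.

```python
import copy

class FakePReLU:
    """Hypothetical stand-in for nn.PReLU: holds one learnable
    slope per channel, like nn.PReLU(num_parameters)."""
    def __init__(self, num_parameters=1, init=0.25):
        self.weight = [init] * num_parameters

# Reusing one constructed instance -> all "layers" share the same weights.
shared = FakePReLU()
block_a = shared
block_b = shared
block_b.weight[0] = 0.5
assert block_a.weight[0] == 0.5  # block_a changed too: weights are shared

# copy.deepcopy gives each bottleneck its own independent weights.
proto = FakePReLU(num_parameters=4)
act_a = copy.deepcopy(proto)
act_b = copy.deepcopy(proto)
act_b.weight[0] = 0.9
assert act_a.weight[0] == 0.25  # act_a is unaffected
```

In PyTorch itself the cleaner fix is usually to construct a fresh `nn.PReLU(...)` inside each bottleneck's `__init__`, rather than building one activation module and passing the same instance around.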
Issue Analytics
- State:
- Created: 4 years ago
- Comments: 5 (3 by maintainers)
Top GitHub Comments
@davidtvs Thanks, the tracing works fine now!
For future reference: the fix is now on the master branch.