Output Layer Type
I'm working on implementing a beam decoder for this model, and I just realized that the output values do not appear to be posteriors. In model.py I see the output layer is a `Linear` layer. Why not a `Softmax` or `LogSoftmax` activation? I suppose such a layer is not strictly necessary for a greedy decode (and applying one is less efficient), but it will be necessary for more complicated decoders. Just wondering if there's a specific reason, or if there's something I'm missing.
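A minimal pure-Python sketch of why the distinction matters (the values and helper below are illustrative, not taken from this repository): a per-step argmax over raw logits equals the argmax over log-probabilities, so greedy decoding works without a softmax, but beam search sums scores across steps, and those sums are only comparable across hypotheses after log-softmax normalization.

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax: subtract the max before exponentiating."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

# Hypothetical raw outputs from a Linear layer at one decoding step.
logits = [2.0, 1.0, 0.1]
logp = log_softmax(logits)

# Greedy decoding is unaffected: log-softmax is monotonic, so the
# highest-scoring token is the same either way.
assert logits.index(max(logits)) == logp.index(max(logp))

# For beam search, however, hypotheses are ranked by summed scores
# across steps; raw logit sums are not valid log-probabilities, while
# log_softmax outputs are (they exponentiate and sum to 1).
assert abs(sum(math.exp(x) for x in logp) - 1.0) < 1e-9
```

In a PyTorch model this would typically amount to applying `torch.nn.functional.log_softmax` to the `Linear` layer's output inside the beam decoder, leaving the model itself unchanged.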
Thanks!
Issue Analytics
- State:
- Created 6 years ago
- Comments: 9 (5 by maintainers)
Top GitHub Comments
Agreed, I think this can be closed. It would be prudent to add a note to the README/wiki in the future highlighting that there is no softmax activation on the output layer.
Added disclaimer, thanks Ryan!