[help] Constructing a synced sequence input and output RNN
Hi there,
I’m building an RNN that assigns an output label to each input element of a sequence, for activity recognition based on location. In this toy model, each input location has shape 4x1 and each output activity label has shape 3x1. There are two hidden layers, and each hidden component has shape 3x1.
My question is how to construct the model. Do I need an Embedding layer? Should my two hidden layers be two TimeDistributedDense layers, or two GRU/LSTM layers?
Please help and I hope I could contribute an example to the repo 😃
My code snippet is shown below.
from keras.models import Sequential

input_dim = 4   # size of each location feature vector
output_dim = 3  # number of activity labels
hidden_dim = 3  # size of each hidden component

print('Build model...')
model = Sequential()
# TODO: add layers to model

print('Compile model...')
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

model.fit(X_train, Y_train, batch_size=1, nb_epoch=10)
print('Done')
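For context on what a "synced sequence input and output" scheme means shape-wise, here is a minimal NumPy sketch, not Keras code: the weights, initialization scale, and sequence length are all made up for illustration. It shows an RNN that consumes one 4-dim input per timestep and emits one 3-way label distribution per timestep, which is exactly the shape contract the question describes.

```python
import numpy as np

input_dim, hidden_dim, output_dim = 4, 3, 3
T = 5  # sequence length (arbitrary for this demo)

rng = np.random.default_rng(0)
W_xh = rng.standard_normal((hidden_dim, input_dim)) * 0.1   # input  -> hidden
W_hh = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1  # hidden -> hidden (the recurrence)
W_hy = rng.standard_normal((output_dim, hidden_dim)) * 0.1  # hidden -> output

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_forward(xs):
    """Synced seq-to-seq: return one softmax output per input timestep."""
    h = np.zeros(hidden_dim)
    ys = []
    for x in xs:                        # one step per input element
        h = np.tanh(W_xh @ x + W_hh @ h)
        ys.append(softmax(W_hy @ h))    # label distribution for THIS step
    return np.array(ys)

xs = rng.standard_normal((T, input_dim))
ys = rnn_forward(xs)
print(ys.shape)  # (5, 3): one 3-way activity distribution per location
```

The key point is that the output is read off at every step of the recurrence, not only at the last one; in Keras terms that is what `return_sequences=True` preserves.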
Issue Analytics
- Created 8 years ago
- Comments:9 (2 by maintainers)
Top Results From Across the Web
- "Understanding the Mechanism and Types of Recurrent Neural ...": Many-to-many (synced) RNNs. As you can see, each output is calculated based on its corresponding input and all the previous outputs. One common ...
- "Recurrent Neural Network (RNN) Tutorial: Types, Examples ...": This RNN takes a sequence of inputs and generates a single output. Sentiment analysis is a good example of this kind of network ...
- "Input-output schemes of RNN. | Download Scientific Diagram": ... an RNN typically has one of the following input-output schemes: sequence-to-one, sequence-to-sequence, synced sequence-to-sequence, as shown in Fig. 3.
- "An Introduction to Recurrent Neural Networks - Experfy Insights": Sequence input and sequence output (e.g. Machine Translation: an RNN reads a sentence in English and then outputs a sentence in French). Synced ...
- "Introduction to RNN and LSTM - The AI dream": A Recurrent Neural Network with input X and output Y with multiple recurrent steps and a hidden unit. ... Synced sequence input and ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
No, TimeDistributedDense is exactly what it sounds like: simply a Dense layer applied to each of its inputs independently at every timestep. The distinction between Dense and TimeDistributedDense is simply that a Dense layer expects 2D input (batch_size, sample_size), whereas TimeDistributedDense expects 3D input (batch_size, time_steps, sample_size). It should be used in conjunction with TimeDistributedSoftmax for the same reason (2D vs. 3D expected input).
There is a GRU layer, however: https://github.com/fchollet/keras/blob/master/keras/layers/recurrent.py#L156-253
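Putting the two suggestions together, a model for the original question might look roughly like the sketch below. This is untested and written against the Keras 0.x API of the era (whose recurrent-layer constructors took input and output sizes positionally); the exact constructor signatures here are from memory, so treat it as pseudocode rather than a definitive implementation.

```
model = Sequential()
model.add(GRU(input_dim, hidden_dim, return_sequences=True))   # hidden layer 1: one output per timestep
model.add(GRU(hidden_dim, hidden_dim, return_sequences=True))  # hidden layer 2
model.add(TimeDistributedDense(hidden_dim, output_dim))        # per-timestep projection to 3 labels
model.add(TimeDistributedSoftmax())                            # per-timestep softmax, as suggested above
```

`return_sequences=True` is what makes each GRU return its full 3D (batch, time, features) output rather than only the last step, which is what the synced scheme requires. No Embedding layer should be needed, since the inputs are already real-valued feature vectors rather than integer token ids.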