LSTM: How to feed the output back to the input?
from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation, TimeDistributed

model = Sequential()
model.add(LSTM(512, input_shape=(None, 4), return_sequences=True))  # 4 chars in the dictionary, any sequence length
model.add(TimeDistributed(Dense(4)))
model.add(Activation('softmax'))
The input here is the one-hot representation of a string, and the dictionary size is set to 4. In other words, there are four types of chars in this string. The output is the probability distribution over what the next char should be.
If the length of the input sequence is 1, the output dimension is 4 by 1. I just wonder whether I could feed the output back to the input and get an output sequence of arbitrary length (illustrated as follows). It may not be reasonable to plug the probabilities back in directly, but I just want to know whether this one-to-many structure can be implemented in Keras. Thanks.
Example:
input1 -(LSTM)-> output1
output1 -(LSTM)-> output2
output2 -(LSTM)-> output3
We could get a 4 by 3 output in the end.
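One way to get that behaviour without changing the model is to run it step by step at prediction time and feed each predicted char back in as the next input. The loop below is only a minimal sketch of that idea; seed, n_steps and the argmax decoding are illustrative assumptions, not something from the original post.

import numpy as np

n_steps = 3                       # how many chars to generate
vocab_size = 4                    # matches the dictionary size above

seed = np.zeros((1, 1, vocab_size))
seed[0, 0, 0] = 1.0               # one-hot encoding of the first char

generated = []
current = seed                    # shape (1, t, 4); grows by one step each iteration
for _ in range(n_steps):
    probs = model.predict(current)[0, -1]   # softmax over the 4 chars at the last step
    next_char = np.argmax(probs)            # or sample from probs instead
    one_hot = np.zeros((1, 1, vocab_size))
    one_hot[0, 0, next_char] = 1.0
    generated.append(next_char)
    current = np.concatenate([current, one_hot], axis=1)  # feed the output back as input

The list generated then holds the three predicted char indices, i.e. the 4-by-3 output described above once written out as one-hot columns.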
I referred to
https://github.com/LantaoYu/SeqGAN/blob/e2b52fb6309851b14765290e8a972ccac09f1bec/target_lstm.py
to write customized recurrent layers.

Actually, I think he will have to write his own custom layer to do that. See DreamyRNN for example: https://github.com/commaai/research/blob/master/models/layers.py#L334-L397. It takes n frames as input and outputs n+m frames, where the last m frames are generated by feeding outputs back as input.
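A rough sketch of that n-to-n+m feedback pattern, done here with a stateful LSTM at inference time instead of a custom layer (the gen model, the generate helper and the weight-copying step below are illustrative assumptions, not code from DreamyRNN or this thread):

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation

vocab_size = 4
gen = Sequential()
gen.add(LSTM(512, batch_input_shape=(1, 1, vocab_size), stateful=True))
gen.add(Dense(vocab_size))
gen.add(Activation('softmax'))
# weights would be copied over from the trained model before generating

def generate(seed_frames, n_generate):
    # Feed the n given frames, then n_generate more produced by feeding
    # each prediction back in; returns all n + n_generate frames.
    gen.reset_states()
    frames = []
    out = None
    for frame in seed_frames:                    # the n given frames
        out = gen.predict(frame.reshape(1, 1, vocab_size))[0]
        frames.append(frame)
    for _ in range(n_generate):                  # the m fed-back frames
        next_frame = np.eye(vocab_size)[np.argmax(out)]
        out = gen.predict(next_frame.reshape(1, 1, vocab_size))[0]
        frames.append(next_frame)
    return np.array(frames)                      # shape (n + m, vocab_size)

Because the LSTM is stateful, its hidden state carries over between predict calls, so each generated frame is conditioned on everything fed in before it.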