Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might look while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

LSTM: How to feed the output back to the input?

See original GitHub issue

from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense, Activation

model = Sequential()
model.add(LSTM(512, input_dim=4, return_sequences=True))
model.add(TimeDistributed(Dense(4)))
model.add(Activation('softmax'))

The input here is the one-hot representation of a string, and the dictionary size is set to 4. In other words, there are four types of characters in this string. The output is the probability distribution over what the next character ought to be.

If the length of the input sequence is 1, the output dimension is 4 by 1. I wonder whether I could feed the output back to the input and get an output sequence of arbitrary length (illustrated as follows). It may not be reasonable to plug the probabilities back in, but I just want to know whether this one-to-many structure can be implemented in Keras. Thanks.

Example:

input1 -(LSTM)-> output1
output1 -(LSTM)-> output2
output2 -(LSTM)-> output3

We could get a 4 by 3 output in the end.
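
A minimal inference-time sketch of that loop, assuming the model defined above: predict one step, convert the predicted distribution to a one-hot vector, append it to the input, and repeat. The generate helper and its arguments are hypothetical names, not from the original thread.

import numpy as np

def generate(model, seed, n_steps):
    """Feed the model's own predictions back in, one step at a time.

    seed: one-hot array of shape (1, t, 4) -- the starting character(s).
    Returns the n_steps generated one-hot vectors, shape (n_steps, 4).
    """
    sequence = seed
    generated = []
    for _ in range(n_steps):
        probs = model.predict(sequence)[0, -1]     # distribution over the next char
        probs = probs.astype('float64')
        probs /= probs.sum()                       # renormalize for sampling
        next_id = np.random.choice(4, p=probs)     # sample (or use argmax for greedy)
        one_hot = np.zeros((1, 1, 4))
        one_hot[0, 0, next_id] = 1.0
        generated.append(one_hot[0, 0])
        sequence = np.concatenate([sequence, one_hot], axis=1)  # grow the input
    return np.array(generated)

Note that this re-runs the model over the whole growing sequence each step, which is simple but quadratic in sequence length; the custom-layer approaches in the comments below avoid that by carrying the LSTM state explicitly.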

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Reactions: 4
  • Comments: 12 (1 by maintainers)

Top GitHub Comments

2 reactions
wgmao commented, Dec 2, 2019

I referred to https://github.com/LantaoYu/SeqGAN/blob/e2b52fb6309851b14765290e8a972ccac09f1bec/target_lstm.py to write customized recurrent layers.
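
The core idea behind such a customized recurrent layer is to step an LSTM cell by hand and carry the states yourself, so the feedback loop is explicit. A rough tf.keras sketch of that idea (not the SeqGAN code itself; step_generate is a made-up name):

import tensorflow as tf

cell = tf.keras.layers.LSTMCell(512)
proj = tf.keras.layers.Dense(4, activation='softmax')

def step_generate(first_char, n_steps):
    """first_char: one-hot tensor of shape (1, 4). Returns (1, n_steps, 4)."""
    states = [tf.zeros((1, 512)), tf.zeros((1, 512))]  # [h, c] for the cell
    x, outputs = first_char, []
    for _ in range(n_steps):
        h, states = cell(x, states)   # one recurrent step
        x = proj(h)                   # next-char distribution...
        outputs.append(x)             # ...fed straight back in as the next input
    return tf.stack(outputs, axis=1)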

2 reactions
EderSantana commented, Oct 16, 2016

Actually, I think he will have to write his own custom layer to do that. See this DreamyRNN for example: https://github.com/commaai/research/blob/master/models/layers.py#L334-L397 It takes n frames as input and outputs n+m frames, where the last m frames are generated by feeding outputs back as inputs.
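
To make the shape of that concrete, here is a hedged sketch of such a custom layer in tf.keras, loosely modeled on the linked DreamyRNN but not a copy of it; the class name FeedbackLSTM and its arguments are invented for illustration.

import tensorflow as tf

class FeedbackLSTM(tf.keras.layers.Layer):
    """Reads n frames, emits n + m: the last m frames are produced
    by feeding the layer's own output back in as the next input."""

    def __init__(self, units, dim, extra_steps, **kwargs):
        super().__init__(**kwargs)
        self.cell = tf.keras.layers.LSTMCell(units)
        self.proj = tf.keras.layers.Dense(dim, activation='softmax')
        self.extra_steps = extra_steps

    def call(self, inputs):                  # inputs: (batch, n, dim), n must be static
        batch = tf.shape(inputs)[0]
        states = [tf.zeros((batch, self.cell.units)),
                  tf.zeros((batch, self.cell.units))]
        outputs = []
        # Teacher-forced phase: step through the n observed frames.
        for t in range(inputs.shape[1]):
            h, states = self.cell(inputs[:, t], states)
            outputs.append(self.proj(h))
        # Free-running phase: feed the last output back m more times.
        x = outputs[-1]
        for _ in range(self.extra_steps):
            h, states = self.cell(x, states)
            x = self.proj(h)
            outputs.append(x)
        return tf.stack(outputs, axis=1)     # (batch, n + m, dim)

Dropping something like FeedbackLSTM(512, 4, extra_steps=m) in place of the LSTM/TimeDistributed pair from the question would give the n + m step output the asker describes.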

Read more comments on GitHub >

Top Results From Across the Web

keras - How to feed the output back to the input in LSTM?
I just wonder could I feed the output back to the input and get an arbitrary length of output sequence (illustrated as follows)....
Read more >
LSTM: Taking previous output values as feature
Take a new input from you (for example, as a one-hot encoded vector); Internally fetch a separate vector which is the LSTM's output...
Read more >
How to feed output of LSTM into itself? : r/pytorch - Reddit
It has a for loop inside itself with 'seq_len' range which exactly do the thing you want to do, feed its output as...
Read more >
How to Reshape Input Data for Long Short-Term Memory ...
In this tutorial, you will discover how to define the input layer to LSTM models and how to reshape your loaded input data...
Read more >
Understanding Recurrent Neural Network (RNN) and Long ...
Backward Propagation: Back propagation method is used to train neural networks. ... Output from previous step is fed as input to the current step...
Read more >
