Single-timestep prediction using LSTM
I am not sure how prediction works with an LSTM. I can correctly train a model using this code:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Masking, LSTM, Dropout, TimeDistributed, Dense

model = Sequential()
model.add(Masking(mask_value=-1.0, input_shape=(None, 5)))
model.add(LSTM(units=nr_units, return_sequences=True, activation='relu'))
model.add(Dropout(0.2))
model.add(TimeDistributed(Dense(6, activation='sigmoid')))
model.compile(loss="categorical_crossentropy", optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=nr_epochs, batch_size=nr_batch, validation_split=0.2)
My question is: can I predict the output of every timestep in a sequence by providing only one timestep as input? I can’t do otherwise, because only the first element of the sequence is known, while timesteps 2,…,n depend on the output of the prediction.
I am making the prediction like this inside a loop:
pred = model.predict(x_test)
where x_test is a single ‘frame’ of the sequence. Does the model retain its internal state between calls like this? Or do I need to provide the full input sequence?
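For intuition, here is a minimal numpy sketch of an LSTM cell (random, purely illustrative weights; not the trained model above). It shows that feeding one timestep per call only reproduces the full-sequence result if the hidden state (h, c) is carried between calls — which is what Keras does with stateful=True, and does not do for a plain stateless predict:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 5, 4
# Illustrative random LSTM weights; gates ordered [input, forget, cell, output].
W = rng.standard_normal((4 * n_hid, n_in)) * 0.5
U = rng.standard_normal((4 * n_hid, n_hid)) * 0.5
b = np.zeros(4 * n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)       # cell state update
    h = o * np.tanh(c)               # hidden state / output
    return h, c

seq = rng.standard_normal((10, n_in))

# Whole sequence in one pass:
h = c = np.zeros(n_hid)
full = []
for x in seq:
    h, c = lstm_step(x, h, c)
    full.append(h)

# One timestep per "predict call", carrying (h, c) between calls:
h = c = np.zeros(n_hid)
stepwise = []
for x in seq:
    h, c = lstm_step(x, h, c)        # state survives to the next call
    stepwise.append(h)

# What a stateless single-frame predict does: fresh zero state every call.
stateless = []
for x in seq:
    h0, c0 = lstm_step(x, np.zeros(n_hid), np.zeros(n_hid))
    stateless.append(h0)
```

Here `full` and `stepwise` are identical, while `stateless` diverges from the second timestep onward — so a loop over single frames needs the state preserved between calls.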
Thanks
Issue Analytics
- Created: 6 years ago
- Comments: 8
To train in batches but predict one step at a time, you can make two models that are exactly the same except for their ‘batch_input_shape’.
After training the first model in batches, you transfer its weights to the second model for prediction.
Build the training model with the first dimension of batch_input_shape set to your training batch size. Then build a prediction model that is identical, except the first dimension of batch_input_shape is 1.
In both models, set stateful=True and return_sequences=True.
You can then call model.reset_states() to reset the recurrent state at the appropriate times during training or prediction.
Here’s some code that might help. https://github.com/mturnshek/deep-learning/blob/master/realtime_rnn_predictions/batch_train_realtime_predict.py
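A minimal sketch of that two-model recipe (the layer sizes here are illustrative, and the Masking/Dropout layers from the question are omitted for brevity). Because stateful=True carries the recurrent state across predict() calls, feeding the prediction model one frame at a time reproduces the full-sequence output:

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, TimeDistributed, Dense

def build(batch_size, timesteps):
    # Same architecture each time; only the fixed batch size / timesteps differ.
    return Sequential([
        Input(batch_shape=(batch_size, timesteps, 5)),
        LSTM(8, stateful=True, return_sequences=True),
        TimeDistributed(Dense(6, activation='sigmoid')),
    ])

train_model = build(batch_size=4, timesteps=10)  # used with model.fit(...)
pred_model = build(batch_size=1, timesteps=1)    # one frame per predict() call
pred_model.set_weights(train_model.get_weights())  # transfer (trained) weights

seq = np.random.rand(1, 10, 5).astype('float32')

# Whole sequence through the (freshly built, zero-state) training-shaped model:
full = train_model.predict(np.repeat(seq, 4, axis=0), verbose=0)[0]

# One frame at a time; stateful=True carries the state across predict() calls:
stepwise = np.concatenate(
    [pred_model.predict(seq[:, t:t + 1], verbose=0)[0] for t in range(10)],
    axis=0)
# stepwise matches full; call model.reset_states() before a new sequence.
```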
Suppose the past 7 days of history influence the current value, and a model is trained with timestep = 3. When only one window of x1, x2, x3 is passed to the model at prediction time, the result is not satisfying. @mturnshek hi, do you mean that the state produced by one predict call is used by the next call? If so, for the targeted prediction, more sequences and more timesteps are passed to the model, yielding more predictions, even though only the last prediction is of interest.
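The sliding-window alternative described here can be sketched as follows. `predict_next` is a hypothetical stand-in for the trained model (a toy deterministic function, not the real network), and the window size matches a model trained with timestep = 3:

```python
import numpy as np

WINDOW = 3  # matches a model trained with timestep = 3

def predict_next(window):
    """Stand-in for model.predict on the last WINDOW frames;
    here just a deterministic toy function for illustration."""
    return window.mean(axis=0) * 0.9 + 0.1

def forecast(history, n_steps):
    """Autoregressively predict n_steps values, feeding each prediction
    back into a sliding window of the last WINDOW frames."""
    window = list(history[-WINDOW:])
    out = []
    for _ in range(n_steps):
        x = np.stack(window[-WINDOW:])
        y = predict_next(x)   # only the final prediction of the window is kept
        out.append(y)
        window.append(y)      # slide the window forward over predicted frames
    return out

history = [np.array([1.0]), np.array([2.0]), np.array([3.0])]
preds = forecast(history, n_steps=5)
```

Each call sees only the last 3 frames, so no state needs to be carried between calls; the temporal context lives entirely in the window.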