Implementing an LSTM-based sequence-to-sequence autoencoder
I'm working on reconstructing sequences of 10 timesteps with 32 features each.
Here is my Keras model:
from keras.layers import Input, LSTM, RepeatVector
from keras.models import Model

timesteps, dimension = 10, 32

inputs = Input(shape=(timesteps, dimension))
encoded = LSTM(8)(inputs)                                   # encoder: compress each sequence to 8 units
decoded = RepeatVector(timesteps)(encoded)                  # repeat the code once per output timestep
decoded = LSTM(dimension, return_sequences=True)(decoded)   # decoder: reconstruct the full sequence
sequence_autoencoder = Model(inputs, decoded)
sequence_autoencoder.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 10, 32)            0
_________________________________________________________________
lstm_1 (LSTM)                (None, 8)                 1312
_________________________________________________________________
repeat_vector_1 (RepeatVecto (None, 10, 8)             0
_________________________________________________________________
lstm_2 (LSTM)                (None, 10, 32)            5248
=================================================================
Total params: 6,560
Trainable params: 6,560
Non-trainable params: 0
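The parameter counts in the summary can be sanity-checked against the standard LSTM formula, 4 * units * (input_dim + units + 1). A small sketch in plain Python (no Keras needed):

```python
def lstm_params(input_dim, units):
    # An LSTM layer has 4 gates, each with a kernel (input_dim x units),
    # a recurrent kernel (units x units), and a bias vector (units).
    return 4 * units * (input_dim + units + 1)

print(lstm_params(32, 8))   # lstm_1, encoder: 32 features -> 8 units, gives 1312
print(lstm_params(8, 32))   # lstm_2, decoder: 8 units -> 32 features, gives 5248
```

The two counts sum to the 6,560 total parameters reported by `summary()`.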
My dataset is a PySpark DataFrame. Each row has a features column holding a wrapped array of shape (10, 32). I guess I need to have wrapped arrays for both input and output. Does elephas support this?
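Independent of whether elephas handles wrapped arrays natively, the rows can be collected to the driver and stacked into the (num_samples, timesteps, dimension) tensor Keras expects. A sketch with NumPy only, where the hypothetical `rows` variable stands in for the nested lists you would get from something like `df.select('features').collect()`:

```python
import numpy as np

# Hypothetical collected rows: each "features" entry is a wrapped array of
# shape (10, 32), i.e. a list of 10 timesteps with 32 features each.
rows = [[[float(t) for _ in range(32)] for t in range(10)] for _ in range(4)]

# Stack into the 3D tensor Keras expects: (num_samples, timesteps, dimension).
X = np.asarray(rows, dtype=np.float32)
print(X.shape)  # -> (4, 10, 32)

# An autoencoder is trained to reproduce its input, so the target equals X:
# sequence_autoencoder.compile(optimizer='adam', loss='mse')
# sequence_autoencoder.fit(X, X, epochs=10)
```

Collecting to the driver defeats the purpose of distributed training with elephas, of course; this only shows the array shape the model needs.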
Issue Analytics
- State:
- Created 5 years ago
- Comments: 6 (1 by maintainers)
Top GitHub Comments
This is currently not supported - I can look into it if it’s something that would benefit a lot of users? I would definitely want some input and assistance, as I do not know what the best way to implement this is.
Moved this issue to the new fork: https://github.com/danielenricocahall/elephas/issues/10. Closing this for now but still on the radar!