
Implementing LSTM based sequence to sequence autoencoder

See original GitHub issue

I’m working on reconstructing a 10 timesteps sequence of 32 features.

Here is my Keras model

from keras.layers import Input, LSTM, RepeatVector
from keras.models import Model

timesteps, dimension = 10, 32

inputs = Input(shape=(timesteps, dimension))
encoded = LSTM(8)(inputs)                                   # compress each sequence to 8 units
decoded = RepeatVector(timesteps)(encoded)                  # repeat the encoding once per timestep
decoded = LSTM(dimension, return_sequences=True)(decoded)   # reconstruct the full sequence
sequence_autoencoder = Model(inputs, decoded)
sequence_autoencoder.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 10, 32)            0         
_________________________________________________________________
lstm_1 (LSTM)                (None, 8)                 1312      
_________________________________________________________________
repeat_vector_1 (RepeatVecto (None, 10, 8)             0         
_________________________________________________________________
lstm_2 (LSTM)                (None, 10, 32)            5248      
=================================================================
Total params: 6,560
Trainable params: 6,560
Non-trainable params: 0
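
As a sanity check, those parameter counts can be reproduced by hand: a Keras LSTM layer has 4 * (input_dim * units + units * units + units) weights, one input kernel, recurrent kernel, and bias per gate. A quick check (hypothetical helper, not from the issue):

def lstm_params(input_dim, units):
    # 4 gates (input, forget, cell, output), each with an input kernel,
    # a recurrent kernel, and a bias vector
    return 4 * (input_dim * units + units * units + units)

print(lstm_params(32, 8))   # 1312 -> lstm_1 (encoder)
print(lstm_params(8, 32))   # 5248 -> lstm_2 (decoder)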

My dataset is a pyspark dataframe. Each row has a features column holding a wrapped array of shape (10, 32). I assume I need wrapped arrays for both the input and the output. Does elephas support this?
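
For reference, elephas aside, one way to pull such a dataframe into the (num_rows, 10, 32) tensor Keras expects is to collect the arrays on the driver. This is only a sketch, assuming the dataframe is named df with a features column as described, and it gives up distributed training:

import numpy as np

# Assumes `df` is the pyspark dataframe above, with a "features"
# column holding a nested (10, 32) array per row.
rows = df.select("features").rdd.map(lambda r: np.array(r["features"]))
x = np.stack(rows.collect())  # shape: (num_rows, 10, 32)

# An autoencoder reconstructs its own input, so inputs == targets.
sequence_autoencoder.compile(optimizer="adam", loss="mse")
sequence_autoencoder.fit(x, x, epochs=10, batch_size=64)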

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 6 (1 by maintainers)

Top GitHub Comments

2 reactions
danielenricocahall commented, Jan 19, 2021

This is currently not supported - I can look into it if it’s something that would benefit a lot of users? I would definitely want some input and assistance, as I do not know what the best way to implement this is.
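
One possible direction while this is unsupported (an untested editorial sketch, not something suggested in the thread): flatten each (10, 32) sequence into a 320-dimensional vector so elephas only ever sees 2D features, and let Reshape layers restore the sequence inside the model:

from keras.layers import Input, LSTM, RepeatVector, Reshape
from keras.models import Model

# Hypothetical flattened variant of the model from the question.
inputs = Input(shape=(10 * 32,))                    # flat 320-dim vectors
seq = Reshape((10, 32))(inputs)                     # restore the sequence shape
encoded = LSTM(8)(seq)
decoded = RepeatVector(10)(encoded)
decoded = LSTM(32, return_sequences=True)(decoded)
decoded = Reshape((10 * 32,))(decoded)              # flatten to match flat targets
autoencoder = Model(inputs, decoded)

The flat features could then, in principle, go through elephas' usual to_simple_rdd / SparkModel path, though whether that works in practice is not confirmed here.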

0 reactions
danielenricocahall commented, Oct 11, 2022

Moved this issue to the new fork: https://github.com/danielenricocahall/elephas/issues/10. Closing this for now but still on the radar!


Top Results From Across the Web

A Gentle Introduction to LSTM Autoencoders
An LSTM Autoencoder is an implementation of an autoencoder for sequence data using an Encoder-Decoder LSTM architecture.

LSTM-AutoEncoders. Understand and perform Composite
Simple LSTM Autoencoder ... To reconstruct each input sequence. First, we will import all the required libraries. ... Next, we will define the...

Step-by-step understanding LSTM Autoencoder layers
We are using return_sequences=True in all the LSTM layers. That means each layer is outputting a 2D array containing each timestep. Thus, ...

Introduction to LSTM Autoencoder Using Keras
LSTM autoencoder is an encoder that is used to compress data using an encoder and decode it to retain original structure using a...

A ten-minute introduction to sequence-to ... - The Keras Blog
I see this question a lot -- how to implement RNN sequence-to-sequence learning in Keras? Here is a short introduction.
