
Need advisory about interconnected layers and models (Autoencoder)

See original GitHub issue

I just finished this guide -> https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html and the model saved successfully. Now I want to separate the training phase from prediction. I know it's possible to load a model again using the load_model() function. In this tutorial there is one LSTM for encoding and one for decoding; how do I get the encoder/decoder back after successfully loading the model?

In addition, how can I solve a problem like this:

/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py:2344: UserWarning: Layer lstm_2 was passed non-serializable keyword arguments: {'initial_state': [<tf.Tensor 'lstm_1/while/Exit_2:0' shape=(?, 366) dtype=float32>, <tf.Tensor 'lstm_1/while/Exit_3:0' shape=(?, 366) dtype=float32>]}. They will not be included in the serialized model (and thus will be missing at deserialization time).

As far as I know, it is necessary to keep these weights/connections between the encoder and decoder. Any ideas?
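For reference, the usual way to get the encoder and decoder back is to rebuild two small inference models around the trained layers, as in the tutorial's follow-up code. The sketch below is self-contained and uses hypothetical layer sizes (not the 366-unit hidden size from the warning, nor the tutorial's 256); after load_model() you would fetch the trained layers with model.get_layer(...) instead of the Python variables used here, but the wiring is the same.

```python
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense

# Hypothetical sizes for illustration only.
num_enc_tokens, num_dec_tokens, latent_dim = 10, 12, 8

# Training model, as in the ten-minute tutorial.
encoder_inputs = Input(shape=(None, num_enc_tokens))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)

decoder_inputs = Input(shape=(None, num_dec_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=[state_h, state_c])
decoder_dense = Dense(num_dec_tokens, activation="softmax")
decoder_outputs = decoder_dense(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Encoder inference model: input sequence -> final LSTM states.
encoder_model = Model(encoder_inputs, [state_h, state_c])

# Decoder inference model: one step at a time, states fed in explicitly
# (this replaces the non-serializable initial_state connection).
state_h_in = Input(shape=(latent_dim,))
state_c_in = Input(shape=(latent_dim,))
dec_out, h, c = decoder_lstm(decoder_inputs,
                             initial_state=[state_h_in, state_c_in])
dec_out = decoder_dense(dec_out)
decoder_model = Model([decoder_inputs, state_h_in, state_c_in],
                      [dec_out, h, c])

# One decoding step: encode, then feed the states to the decoder.
states = encoder_model.predict(np.zeros((1, 5, num_enc_tokens)))
out, h, c = decoder_model.predict([np.zeros((1, 1, num_dec_tokens))] + states)
print(out.shape)
```

This also explains the warning above: the initial_state keyword linking lstm_1 to lstm_2 is not serialized, so the inference-time wiring has to be rebuilt in code as shown.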

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Reactions: 3
  • Comments: 19

Top GitHub Comments

3 reactions
Woekiki commented, Mar 8, 2018

Hey guys, I'm having the same problem! The fix mentioned above doesn't work for me. I'm using Keras 2.1.4 and implementing the same autoencoder pattern as described above. However, loading my model gives completely different results. I hope a bugfix will arrive soon, because for now I can't use my work.
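A common workaround for the "different results after loading" symptom is to sidestep full-model serialization entirely: save only the weights, rebuild the architecture from code, and load the weights into the fresh graph. A minimal sketch, assuming the model-building code is available at prediction time (hypothetical sizes; build_model is an illustrative helper, not part of the tutorial):

```python
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense

def build_model(num_tokens=8, latent_dim=4):
    """Rebuild the tutorial-style training graph from code."""
    enc_in = Input(shape=(None, num_tokens))
    _, h, c = LSTM(latent_dim, return_state=True)(enc_in)
    dec_in = Input(shape=(None, num_tokens))
    dec_seq, _, _ = LSTM(latent_dim, return_sequences=True,
                         return_state=True)(dec_in, initial_state=[h, c])
    out = Dense(num_tokens, activation="softmax")(dec_seq)
    return Model([enc_in, dec_in], out)

trained = build_model()
trained.save_weights("s2s_weights.h5")   # weights only, no graph serialization

restored = build_model()                 # same code path -> same architecture
restored.load_weights("s2s_weights.h5")

# Both models now produce identical predictions.
x = [np.zeros((1, 3, 8)), np.zeros((1, 3, 8))]
same = np.allclose(trained.predict(x), restored.predict(x))
print(same)
```

Because save_weights()/load_weights() never touches the non-serializable initial_state argument, the encoder-to-decoder connection survives intact in the rebuilt graph.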

2 reactions
guidesc commented, Apr 13, 2018

Hi all,

I have the same issue here. Could someone reopen the issue? Thanks for any advice or suggestions.

Read more comments on GitHub >

Top Results From Across the Web

Applied Deep Learning - Part 3: Autoencoders | by Arden Dertat
Both the encoder and decoder are fully-connected feedforward neural networks, essentially the ANNs we covered in Part 1. Code is a single layer...
Read more >
Neural Networks: What are they and why do they matter? - SAS
Neural networks are computing systems with interconnected nodes that work much like neurons in the human brain. Using algorithms, they can recognize hidden ......
Read more >
Artificial Neural Networks Applications and Algorithms
Artificial Neural Networks Applications, Architecture and algorithms to perform Pattern Recognition, Fraud Detection with Deep Learning.
Read more >
Artificial neural network - Wikipedia
An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological...
Read more >
Advanced Model Architectures | Chan`s Jupyter
You will create an autoencoder to reconstruct noisy images, visualize convolutional neural network activations, use deep pre-trained models to ...
Read more >
