Need advice about interconnected layers and models (Autoencoder)
I just finished this guide -> https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html and the model saved successfully. Now I want to separate the training phase from prediction. I know it's possible to load a model again using the load_model() function. In this tutorial there is one LSTM for encoding and one for decoding; how do I get the encoder/decoder back after successfully loading the model?
In addition, how can I solve this warning:
/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py:2344: UserWarning: Layer lstm_2 was passed non-serializable keyword arguments: {'initial_state': [<tf.Tensor 'lstm_1/while/Exit_2:0' shape=(?, 366) dtype=float32>, <tf.Tensor 'lstm_1/while/Exit_3:0' shape=(?, 366) dtype=float32>]}. They will not be included in the serialized model (and thus will be missing at deserialization time).
As far as I know, it is necessary to keep these weights/connections between the encoder and decoder. Any ideas?
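One way to get the encoder and decoder back is to pull their layers out of the loaded training model and wrap them in two new inference models, as the blog post does before saving. The sketch below (using the tf.keras API; adapt the imports for standalone Keras) first builds and saves a model shaped like the tutorial's as a stand-in for your trained one — the layer names, `latent_dim`, and `num_tokens` here are illustrative assumptions, not values from the original issue:

```python
import numpy as np
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.layers import Input, LSTM, Dense

latent_dim, num_tokens = 8, 10  # the warning above suggests latent_dim was 366 in training

# --- stand-in for training: build and save a model shaped like the blog post's ---
enc_in = Input(shape=(None, num_tokens))
_, h, c = LSTM(latent_dim, return_state=True)(enc_in)
dec_in = Input(shape=(None, num_tokens))
dec_seq, _, _ = LSTM(latent_dim, return_sequences=True, return_state=True)(
    dec_in, initial_state=[h, c])
dec_out = Dense(num_tokens, activation='softmax')(dec_seq)
Model([enc_in, dec_in], dec_out).save('s2s.h5')

# --- prediction script: load, then carve out encoder and decoder models ---
model = load_model('s2s.h5')
encoder_lstm, decoder_lstm = [l for l in model.layers if isinstance(l, LSTM)]
dense = [l for l in model.layers if isinstance(l, Dense)][0]

# Encoder: map the first model input to the encoder LSTM's final states.
_, state_h, state_c = encoder_lstm.output
encoder_model = Model(model.inputs[0], [state_h, state_c])

# Decoder: reuse the loaded layers, but feed the states in as explicit inputs
# so it can be stepped one timestep at a time during inference.
state_h_in = Input(shape=(latent_dim,))
state_c_in = Input(shape=(latent_dim,))
out, h2, c2 = decoder_lstm(model.inputs[1], initial_state=[state_h_in, state_c_in])
decoder_model = Model([model.inputs[1], state_h_in, state_c_in],
                      [dense(out), h2, c2])
```

Selecting layers by type rather than by index (`model.layers[2]` etc.) makes the reconstruction less fragile if the layer ordering in the saved file differs from the build order.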
Issue Analytics
- Created 6 years ago
- Reactions: 3
- Comments: 19
Hey guys, I'm having the same problem! The fix mentioned above doesn't work for me. I'm using Keras 2.1.4 and implementing the same autoencoder pattern described above, but loading my model gives completely different results. I hope a bugfix will arrive soon, as for now I can't use my work.
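A common workaround for this class of serialization bug is to skip whole-model saving entirely: save only the weights, and rebuild the architecture from code in the prediction script before calling load_weights(). Since the graph (including the initial_state wiring) is reconstructed by your own code, nothing needs to survive serialization. A minimal sketch, assuming the blog post's architecture (the function name and dimensions here are illustrative):

```python
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense

def build_seq2seq(latent_dim=8, num_tokens=10):
    """Rebuild the blog-post architecture from code instead of from a saved graph."""
    encoder_inputs = Input(shape=(None, num_tokens))
    _, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)
    decoder_inputs = Input(shape=(None, num_tokens))
    decoder_outputs, _, _ = LSTM(latent_dim, return_sequences=True, return_state=True)(
        decoder_inputs, initial_state=[state_h, state_c])
    decoder_outputs = Dense(num_tokens, activation='softmax')(decoder_outputs)
    return Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Training script: after fitting, persist only the weights.
trained = build_seq2seq()
trained.save_weights('s2s.weights.h5')

# Prediction script: rebuild the identical graph, then load the weights into it.
restored = build_seq2seq()
restored.load_weights('s2s.weights.h5')
```

The two scripts must call the same build function with the same hyperparameters, otherwise the weight shapes won't match at load time.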
Hi all,
I have the same issue here. Could someone reopen the issue? Thanks for any advice or suggestions.