Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

example code lstm_seq2seq.py warns about non-serializable keywords when attempting to save a model

See original GitHub issue

Please make sure that the boxes below are checked before you submit your issue. If your issue is an implementation question, please ask your question on StackOverflow or join the Keras Slack channel and ask there instead of filing a GitHub issue.

Thank you!

  • [x] Check that you are up-to-date with the master branch of Keras. You can update with: pip install git+git://github.com/keras-team/keras.git --upgrade --no-deps

  • [x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here.

  • [ ] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with: pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps

  • [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short). https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq.py

Basic Issue: I’m running on TensorFlow

In the Keras example linked above, the file lstm_seq2seq.py generates an error.

line 153: model.save('s2s.h5') emits

2379: UserWarning: Layer lstm_2 was passed non-serializable keyword arguments: {'initial_state': [<tf.Tensor 'lstm_1/while/Exit_2:0' shape=(?, 256) dtype=float32>, <tf.Tensor 'lstm_1/while/Exit_3:0' shape=(?, 256) dtype=float32>]}. They will not be included in the serialized model (and thus will be missing at deserialization time).
  str(node.arguments) + '. They will not be included '

Although this is phrased as a warning, not an error, the result seems to be that the saved model is missing required information.

I’ve successfully saved other models in the past, so it’s something specific to this model.
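The cause is visible in the warning text itself: in lstm_seq2seq.py the decoder LSTM is called with initial_state=encoder_states, and Keras records each layer call’s keyword arguments in the model config, which must be JSON-serializable. Live tf.Tensor objects are not. A minimal dependency-free sketch of that distinction (FakeTensor is a hypothetical stand-in for a tf.Tensor such as lstm_1/while/Exit_2:0):

```python
import json

# Keras stores each layer call's keyword arguments in the saved model config.
# Plain Python values serialize fine; live tensor objects do not.
def is_json_serializable(value):
    try:
        json.dumps(value)
        return True
    except TypeError:
        return False

class FakeTensor:  # hypothetical stand-in for a tf.Tensor
    pass

print(is_json_serializable({"units": 256}))                      # True
print(is_json_serializable({"initial_state": [FakeTensor()]}))   # False
```

This is why only the initial_state argument is dropped at save time: the rest of the layer config is ordinary JSON data.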

Other information: I’ve tried breaking up the model and saving the weights and config separately (see below), but model.get_weights() triggers the same warning.

# alternative method to save the model by splitting it into weights and config
import os
import pickle

def save_model(model, MODEL_DIR):
  if not os.path.isdir(MODEL_DIR):
    os.makedirs(MODEL_DIR)
  weights = model.get_weights()
  with open(os.path.join(MODEL_DIR, 'model'), 'wb') as file_:
    pickle.dump(weights[1:], file_)
  with open(os.path.join(MODEL_DIR, 'config.json'), 'w') as file_:
    file_.write(model.to_json())

save_model(model, 'model_dir')
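For completeness, the loading side of this split-save pattern would simply mirror it. A dependency-free sketch of the round trip, with plain lists and dicts standing in for the real weight arrays and Keras JSON config (save_split and load_split are hypothetical helper names, not Keras APIs):

```python
import json
import os
import pickle
import tempfile

def save_split(weights, config, model_dir):
    # weights go to a pickle file, architecture config to JSON
    os.makedirs(model_dir, exist_ok=True)
    with open(os.path.join(model_dir, "model"), "wb") as f:
        pickle.dump(weights, f)
    with open(os.path.join(model_dir, "config.json"), "w") as f:
        json.dump(config, f)

def load_split(model_dir):
    with open(os.path.join(model_dir, "model"), "rb") as f:
        weights = pickle.load(f)
    with open(os.path.join(model_dir, "config.json")) as f:
        config = json.load(f)
    return weights, config

with tempfile.TemporaryDirectory() as d:
    save_split([[0.1, 0.2]], {"units": 256}, d)
    w, c = load_split(d)
    print(w, c)  # [[0.1, 0.2]] {'units': 256}
```

Note, however, that this pattern cannot dodge the original problem: model.to_json() serializes the same config that model.save() does, so the non-serializable initial_state argument is dropped either way.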

I tried to look into how model.get_weights() is implemented. It’s just a loop that calls layer.get_weights() for each layer of the model.
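That loop can be sketched without TensorFlow (Layer here is a hypothetical stand-in; the real Keras version concatenates each layer’s weight arrays in layer order):

```python
# Minimal sketch of how Model.get_weights flattens per-layer weights.
class Layer:  # hypothetical stand-in for a Keras layer
    def __init__(self, weights):
        self._weights = weights

    def get_weights(self):
        return list(self._weights)

def get_weights(layers):
    # concatenate each layer's weight list, in layer order
    weights = []
    for layer in layers:
        weights += layer.get_weights()
    return weights

layers = [Layer([1, 2]), Layer([3])]
print(get_weights(layers))  # [1, 2, 3]
```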

Issue Analytics

  • State:closed
  • Created 5 years ago
  • Reactions:19
  • Comments:32

Top GitHub Comments

10 reactions
chrispyT commented, Aug 8, 2018

@microdave There are two versions of the encoder/decoder constructor. The one at https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq.py (as linked by the OP) only works if you have just trained the model, because it relies on encoder_inputs and encoder_states already being defined when it assigns: encoder_model = Model(encoder_inputs, encoder_states)

It has these because encoder_inputs and encoder_states are defined during model setup. The other version at https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq_restore.py is needed if you are reloading the model: it dissects the layers of the loaded model and picks out the bits it needs to reconstruct everything. E.g. it precedes the above line with

encoder_inputs = model.input[0]   # input_1
encoder_outputs, state_h_enc, state_c_enc = model.layers[2].output   # lstm_1
encoder_states = [state_h_enc, state_c_enc]

I had the same experience as you until I realised this second version was needed, so hopefully this will fix your issue.

4 reactions
KinWaiCheuk commented, May 18, 2018

I am also getting the same “warning”. Any solution to this problem yet?

Read more comments on GitHub >

Top Results From Across the Web

Saving Keras model - UserWarning: Layer XX was passed ...
python - Saving Keras model - UserWarning: Layer XX was passed non-serializable keyword arguments - Stack Overflow
Read more >
