
Freeze a model to serve within an API

See original GitHub issue

Hi.

I successfully trained and tested on a Portuguese corpus I prepared (after changing the tokenization line in utils.py to `for word in word_tokenize(sentence, language='portuguese'):`).

I’d like to have a frozen model in a single .pb file in order to serve it within an API. I tried several approaches, like the one described here: https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc — but without success.

Would you consider providing a method to export a saved model? Or could you point me in the right direction?

Thanks!
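For reference, freezing a TF 1.x checkpoint into a single .pb file generally follows the pattern sketched below. The checkpoint path ./model and the output node name decoder/decoder/transpose_1 (the name that comes up later in this thread) are assumptions; substitute the values from your own graph.

```python
# Minimal sketch: freeze a TF 1.x checkpoint into a single .pb file.
# The checkpoint path and the output node name are assumptions; adjust
# them for your own model.
import tensorflow as tf

with tf.Session() as sess:
    # Restore the graph structure and the trained weights.
    saver = tf.train.import_meta_graph('./model/model.ckpt.meta')
    saver.restore(sess, tf.train.latest_checkpoint('./model'))

    # Bake every variable into the graph as a constant, keeping only the
    # subgraph needed to compute the listed output nodes.
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names=['decoder/decoder/transpose_1'])

    # Serialize the frozen graph to a single protobuf file.
    with tf.gfile.GFile('./model/frozen_model.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
```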

Issue Analytics

  • State: open
  • Created: 5 years ago
  • Reactions: 1
  • Comments: 15 (8 by maintainers)

Top GitHub Comments

2 reactions
gifflarn commented, Sep 12, 2018

@PauloQuerido I too am trying to get a frozen graph to work. I got the .pb file from the link you posted, using its freeze_graph function with `output_node_names=decoder/decoder/transpose_1`. I am now stuck on using the frozen graph: importing it yields “You must feed a value to tensor” for Placeholder_2 and Placeholder_3, which are tensors used in training (I think). It’s weird, because in test.py running model.prediction with only three fed tensors works, but once frozen the model doesn’t accept only those three. If you are able to progress further than this, please let me know.
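A common workaround for this kind of error, sketched below under assumptions: if Placeholder_2 and Placeholder_3 are training-only inputs (for example a dropout keep-probability), they can be remapped to constants via the input_map argument of tf.import_graph_def when the frozen graph is loaded, so they no longer need to be fed. The placeholder names, dtypes, and the value 1.0 are guesses; inspect the graph to find the real ones.

```python
# Hypothetical workaround: remap training-only placeholders to constants
# when importing the frozen graph. Placeholder names, dtypes, and the
# constant values below are assumptions about this particular model.
import tensorflow as tf

with tf.gfile.GFile('frozen_model.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(
        graph_def,
        name='prefix',
        input_map={
            'Placeholder_2:0': tf.constant(1.0),  # e.g. keep_prob at test time
            'Placeholder_3:0': tf.constant(1.0),
        })
```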

1 reaction
gifflarn commented, Sep 21, 2018

@gogasca From my understanding, you only specify the last layer(s) of the graph as output nodes, ‘freezing’ everything between the input and output nodes. I only specified decoder/decoder/transpose_1 as the output node. And I hoped I could get it to work like this, without success:

```python
output = graph.get_tensor_by_name('prefix/decoder/decoder/transpose_1:0')
input1 = graph.get_tensor_by_name('prefix/batch_size:0')
input2 = graph.get_tensor_by_name('prefix/Placeholder:0')
input3 = graph.get_tensor_by_name('prefix/Placeholder_1:0')

prediction = self.sess.run(output, feed_dict={
    input1: len(batch), input2: batch, input3: batch_x_len})
```
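Assembled into a self-contained TF 1.x sketch, the snippet above would look roughly like this; batch and batch_x_len stand in for your own preprocessed inputs, and the tensor names are the ones reported in this thread.

```python
# Sketch: load a frozen .pb and run inference with the three inputs
# discussed above. Tensor names are taken from this thread; batch and
# batch_x_len are whatever your preprocessing produces.
import tensorflow as tf

def load_frozen_graph(pb_path, prefix='prefix'):
    """Load a frozen GraphDef into a fresh graph under a name scope."""
    with tf.gfile.GFile(pb_path, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name=prefix)
    return graph

def predict(graph, batch, batch_x_len):
    output = graph.get_tensor_by_name('prefix/decoder/decoder/transpose_1:0')
    input1 = graph.get_tensor_by_name('prefix/batch_size:0')
    input2 = graph.get_tensor_by_name('prefix/Placeholder:0')
    input3 = graph.get_tensor_by_name('prefix/Placeholder_1:0')
    with tf.Session(graph=graph) as sess:
        return sess.run(output, feed_dict={
            input1: len(batch), input2: batch, input3: batch_x_len})
```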

Read more comments on GitHub.

Top Results From Across the Web

TensorFlow: How to freeze a model and serve it with a python API
We are going to explore two parts of using an...

Freezing a Keras model - Towards Data Science
Once you have designed a network using Keras, you may want to serve it in another API, on the web, or other medium....

How to freeze a model and serve it with a python API
There is a better way, see [1]. Convert your variable tensors into constant tensors and you are all set. You can just store...

TensorFlow: How to export, freeze models with python API ...
Step #2: Freezing a graph converts your variable tensors into constant tensors and it will combine the graph structure with the values from...

I want to freeze a model to change the API from Python to C++
frozen_graph = freeze_session(sess, output_names=[out.op.name for out in model.outputs]). This error appears: Keras symbolic inputs/outputs...
