Tensorflow Serving graph export
After using the freeze_graph.py script, you get a single .pb file, which doesn't have the signature/tags required for use in TensorFlow Serving (TFS). For that purpose we created this little script, which may be helpful for others. With the output from this script you can use the saved_model_cli utility to build a TFS client.
This was tested on Python 3.6.4 and TensorFlow 1.10.1.
import argparse
import sys

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def
from tensorflow.gfile import GFile
from tensorflow import GraphDef, Graph, import_graph_def, Session


def main(args):
    # Load the frozen graph produced by freeze_graph.py
    with GFile(args.frozen_model_path, "rb") as f:
        graph_def = GraphDef()
        graph_def.ParseFromString(f.read())

    with Session() as sess:
        # Import the graph_def into a new Graph
        with Graph().as_default() as graph:
            import_graph_def(graph_def, name='')

            # Build a prediction signature from the graph's input/output tensors
            signature = predict_signature_def(
                inputs={'image_batch': graph.get_tensor_by_name('image_batch:0'),
                        'phase_train': graph.get_tensor_by_name('phase_train:0')},
                outputs={'embeddings': graph.get_tensor_by_name('embeddings:0')}
            )

            # Write a SavedModel with the SERVING tag and the signature above
            builder = saved_model_builder.SavedModelBuilder(args.output_model_dir)
            builder.add_meta_graph_and_variables(
                sess=sess,
                tags=[tag_constants.SERVING],
                signature_def_map={'serving_default': signature}
            )
            builder.save()


def parse_arguments(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument('frozen_model_path', type=str,
                        help='Frozen model path.')
    parser.add_argument('output_model_dir', type=str,
                        help='Directory in which to write the exported SavedModel.')
    return parser.parse_args(argv)


if __name__ == '__main__':
    main(parse_arguments(sys.argv[1:]))
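For example, assuming the script above is saved as export_saved_model.py (the file name is illustrative), exporting a frozen model and inspecting the resulting signature looks like this:

python3 export_saved_model.py /path/to/frozen_model.pb /path/to/export_dir/1
saved_model_cli show --all --dir /path/to/export_dir/1

TensorFlow Serving expects the model base path to contain numbered version subdirectories, so pointing output_model_dir at a directory like /path/to/export_dir/1 keeps the export ready to serve.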
Top GitHub Comments
@bmachin After some effort I got the output embedding array using TF Serving. The output of my
saved_model_cli show --all --dir <path_to_your_model_dir>/<model_version>
(path_to_your_model_dir is the full path, not the relative path) is as follows:
and my tf_client script (serving_tf_docker_client.py) is as follows:
I used facenet/models.config to find request.model_spec.name = facenet.
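For reference, a TF Serving model config entry for this model might look roughly like the following (a sketch, not the actual contents of facenet/models.config; the base_path is an assumption):

model_config_list {
  config {
    name: 'facenet'
    base_path: '/models/facenet'
    model_platform: 'tensorflow'
  }
}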
When I run
python3 src/serving_tf_docker_client.py
I get
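Neither the client script nor its output is reproduced above. A minimal sketch of what a gRPC client such as serving_tf_docker_client.py could look like, assuming the default gRPC port 8500, a 160x160 placeholder input batch, and the serving_default signature exported above:

import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc


def main():
    # Connect to the TF Serving gRPC endpoint (host/port are assumptions)
    channel = grpc.insecure_channel('localhost:8500')
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'facenet'             # matches the name in models.config
    request.model_spec.signature_name = 'serving_default'

    # Placeholder input; a real client would load and prewhiten face crops
    images = np.zeros((1, 160, 160, 3), dtype=np.float32)
    request.inputs['image_batch'].CopyFrom(tf.make_tensor_proto(images))
    request.inputs['phase_train'].CopyFrom(tf.make_tensor_proto(False))

    result = stub.Predict(request, 10.0)  # 10-second timeout
    embeddings = tf.make_ndarray(result.outputs['embeddings'])
    print(embeddings.shape)


if __name__ == '__main__':
    main()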
thanks, it is good.