
How to export a model for TensorFlow Serving?


Following some tutorials on exporting a model for TensorFlow Serving, I came up with the trial below:

import os
import tensorflow as tf

# Project imports from text-detection-ctpn
from lib.fast_rcnn.config import cfg, cfg_from_file
from lib.networks.factory import get_network

cfg_from_file('ctpn/text.yml')
config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(config=config) as sess:
    net = get_network("VGGnet_test")

    # Restore weights from the latest checkpoint
    saver = tf.train.Saver()
    try:
        ckpt = tf.train.get_checkpoint_state(cfg.TEST.checkpoints_path)
        saver.restore(sess, ckpt.model_checkpoint_path)
    except Exception:
        # Raising a bare string is invalid in Python 3; raise a real exception.
        # Use the configured path here, since ckpt may be None at this point.
        raise IOError('Missing pre-trained model: {}'.format(cfg.TEST.checkpoints_path))

    # The main export trial is here
    #############################
    export_path = os.path.join(
        tf.compat.as_bytes('/tmp/ctpn'),
        tf.compat.as_bytes(str(1)))  # version directory: /tmp/ctpn/1
    builder = tf.saved_model.builder.SavedModelBuilder(export_path)

    freezing_graph = sess.graph
    prediction_signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={'input': freezing_graph.get_tensor_by_name('Placeholder:0')},
        outputs={'output': freezing_graph.get_tensor_by_name('Placeholder_1:0')}
    )

    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature
        },
        clear_devices=True)

    builder.save()
    print('[INFO] Export SavedModel into {}'.format(export_path))
    #############################

Printing the tensors of freezing_graph gives:

Tensor("Placeholder:0", shape=(?, ?, ?, 3), dtype=float32)
Tensor("conv5_3/conv5_3:0", shape=(?, ?, ?, 512), dtype=float32)
Tensor("rpn_conv/3x3/rpn_conv/3x3:0", shape=(?, ?, ?, 512), dtype=float32)
Tensor("lstm_o/Reshape_2:0", shape=(?, ?, ?, 512), dtype=float32)
Tensor("lstm_o/Reshape_2:0", shape=(?, ?, ?, 512), dtype=float32)
Tensor("rpn_cls_score/Reshape_1:0", shape=(?, ?, ?, 20), dtype=float32)
Tensor("rpn_cls_prob:0", shape=(?, ?, ?, ?), dtype=float32)
Tensor("Reshape_2:0", shape=(?, ?, ?, 20), dtype=float32)
Tensor("rpn_bbox_pred/Reshape_1:0", shape=(?, ?, ?, 40), dtype=float32)
Tensor("Placeholder_1:0", shape=(?, 3), dtype=float32)

After getting the exported model at /tmp/ctpn/1, I tried to load it into a TensorFlow Serving server:

tensorflow_model_server --port=9000 --model_name=ctpn --model_base_path=/tmp/ctpn

But it failed with an error:

Loading servable: {name: detector version: 1} failed: Not found: Op type not registered 'PyFunc' in binary running on [...]. Make sure the Op and Kernel are registered in the binary running in this process.

So there are 2 questions:

  • Am I right about the inputs (Placeholder:0) and the outputs (Placeholder_1:0) of prediction_signature?
  • Where does the PyFunc op come from? (One way to locate it is sketched below.)
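
Not part of the original question, but one way to answer the second bullet is to scan the restored graph for PyFunc ops before exporting; any hit is an op implemented in Python via tf.py_func, which a stock TF Serving binary cannot run. A minimal sketch, assuming sess is the session from the export script above:

# Assumes `sess` is the session from the export script above
py_func_ops = [op for op in sess.graph.get_operations() if op.type == 'PyFunc']
for op in py_func_ops:
    print(op.name, [t.name for t in op.outputs])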

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 20 (1 by maintainers)

Top GitHub Comments

5 reactions
amit2319 commented, Oct 8, 2018

@MrOCR I resolved it. Just remove the

(self.feed('rpn_cls_prob_reshape', 'rpn_bbox_pred', 'im_info')
     .proposal_layer(_feat_stride, anchor_scales, 'TEST', name='rois'))

from text-detection-ctpn/lib/networks/VGGnet_test.py

It will exclude the proposal_layer_py layer, i.e. the tf.py_func-wrapped op that shows up as PyFunc.
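
A consequence worth noting (my addition, not from the thread): with the proposal layer removed, the exported graph ends at the RPN heads (rpn_cls_prob and rpn_bbox_pred in the tensor listing above), so the proposal step has to run on the client after each request. A rough sketch, assuming the repo's Python implementation lives at lib/rpn_msr/proposal_layer_tf.py with roughly this signature:

# Assumed import path and signature from text-detection-ctpn; verify against the repo
from lib.rpn_msr.proposal_layer_tf import proposal_layer

def proposals_from_server_outputs(rpn_cls_prob, rpn_bbox_pred, im_info):
    # rpn_cls_prob, rpn_bbox_pred: NumPy arrays fetched from the served model
    # im_info: [[height, width, scale]] describing the query image
    return proposal_layer(rpn_cls_prob, rpn_bbox_pred, im_info,
                          'TEST', _feat_stride=[16], anchor_scales=[16])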

5 reactions
eragonruan commented, Jun 26, 2018

@hiepph I have released the pb file. Enjoy it.
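
For completeness (my sketch, not from the thread): if the released file is a frozen GraphDef (.pb), it can be loaded for local inference in TF 1.x as below. Note that TF Serving consumes SavedModels, so a frozen .pb would still need to be re-wrapped into a SavedModel for serving. The file name here is assumed:

import tensorflow as tf

# Load a frozen GraphDef from disk
with tf.gfile.GFile('ctpn.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    # Tensor names taken from the listing in the question
    image_input = graph.get_tensor_by_name('Placeholder:0')
    cls_prob = graph.get_tensor_by_name('rpn_cls_prob:0')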
