
Tensorflow Serving graph export


After using the freeze_graph.py script, you get a single .pb file, which lacks the signatures and tags required by TensorFlow Serving (TFS). For that purpose we wrote this small script, which may be helpful for others: it wraps the frozen graph in a SavedModel. With its output you can use the saved_model_cli utility to inspect the signatures and build a TFS client.

This was tested on Python 3.6.4 and TensorFlow 1.10.1.

import argparse
import sys

import tensorflow as tf
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def


def main(args):
    # Read the frozen GraphDef produced by freeze_graph.py.
    with tf.gfile.GFile(args.frozen_model_path, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Import the frozen graph into a fresh Graph, and bind the session
    # to that graph so the builder exports the right one.
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        with tf.Session(graph=graph) as sess:
            # Build a predict SignatureDef from the graph's input/output tensors.
            signature = predict_signature_def(
                inputs={'image_batch': graph.get_tensor_by_name('image_batch:0'),
                        'phase_train': graph.get_tensor_by_name('phase_train:0')},
                outputs={'embeddings': graph.get_tensor_by_name('embeddings:0')}
            )

            # Write a SavedModel with the SERVING tag so TFS can load it.
            builder = saved_model_builder.SavedModelBuilder(args.output_model_dir)
            builder.add_meta_graph_and_variables(
                sess=sess,
                tags=[tag_constants.SERVING],
                signature_def_map={'serving_default': signature}
            )
            builder.save()


def parse_arguments(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument('frozen_model_path', type=str,
                        help='Path to the frozen model (.pb) file.')
    parser.add_argument('output_model_dir', type=str,
                        help='Directory to write the exported SavedModel to.')
    return parser.parse_args(argv)


if __name__ == '__main__':
    main(parse_arguments(sys.argv[1:]))
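
For reference, a typical invocation and sanity check might look like this (the script filename and paths are placeholders; note that TensorFlow Serving expects each model version in a numeric subdirectory, hence the trailing /1):

python export_saved_model.py /path/to/frozen_model.pb /path/to/export/facenet/1
saved_model_cli show --all --dir /path/to/export/facenet/1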

Issue Analytics

  • State: open
  • Created: 4 years ago
  • Reactions: 1
  • Comments: 6

Top GitHub Comments

1 reaction
ajinkya933 commented, Mar 30, 2020

@bmachin After some elaborate effort I got the output embedding array using TF Serving. The output of saved_model_cli show --all --dir <path_to_your_model_dir>/<model_version> (where the path is the full path, not a relative one) is as follows:

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['image_batch'] tensor_info:
        dtype: DT_FLOAT
        shape: unknown_rank
        name: image_batch:0
    inputs['phase_train'] tensor_info:
        dtype: DT_BOOL
        shape: unknown_rank
        name: phase_train:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['embeddings'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 512)
        name: embeddings:0
  Method name is: tensorflow/serving/predict
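
(For context: the script name suggests TF Serving was run in Docker. One standard way to serve this SavedModel over gRPC on port 8500 is the official tensorflow/serving image; the host path below is a placeholder:)

docker run -p 8500:8500 \
    --mount type=bind,source=/path/to/export/facenet,target=/models/facenet \
    -e MODEL_NAME=facenet -t tensorflow/serving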

And my TFS client script (serving_tf_docker_client.py) is as follows:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from scipy import misc
import tensorflow as tf
import numpy as np
import sys
import os
import copy
import argparse
import facenet
import align.detect_face
import grpc
import math
from tensorflow_serving.apis import prediction_service_pb2_grpc
from tensorflow_serving.apis import predict_pb2

class serving_class:

    def load_and_align_data(self, image_paths, image_size, margin, gpu_memory_fraction):

        minsize = 20  # minimum size of face
        threshold = [0.6, 0.7, 0.7]  # thresholds for the three MTCNN stages
        factor = 0.709  # scale factor

        print('Creating networks and loading parameters')
        with tf.Graph().as_default():
            gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_memory_fraction)
            sess = tf.Session(config=tf.ConfigProto(
                gpu_options=gpu_options, log_device_placement=False))
            with sess.as_default():
                pnet, rnet, onet = align.detect_face.create_mtcnn(sess, None)

        tmp_image_paths = copy.copy(image_paths)
        img_list = []
        for image in tmp_image_paths:
            img = misc.imread(os.path.expanduser(image), mode='RGB')
            img_size = np.asarray(img.shape)[0:2]
            bounding_boxes, _ = align.detect_face.detect_face(
                img, minsize, pnet, rnet, onet, threshold, factor)
            if len(bounding_boxes) < 1:
                image_paths.remove(image)
                print("can't detect face, remove ", image)
                continue
            det = np.squeeze(bounding_boxes[0, 0:4])
            bb = np.zeros(4, dtype=np.int32)
            bb[0] = np.maximum(det[0]-margin/2, 0)
            bb[1] = np.maximum(det[1]-margin/2, 0)
            bb[2] = np.minimum(det[2]+margin/2, img_size[1])
            bb[3] = np.minimum(det[3]+margin/2, img_size[0])
            cropped = img[bb[1]:bb[3], bb[0]:bb[2], :]
            aligned = misc.imresize(cropped, (image_size, image_size), interp='bilinear')
            prewhitened = facenet.prewhiten(aligned)
            img_list.append(prewhitened)
        images = np.stack(img_list)
        return images

    def get_embeddings(self, images_paths, batch_size):

        channel = grpc.insecure_channel('localhost:8500')  # localhost:8500 in your case
        stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

        request = predict_pb2.PredictRequest()
        request.model_spec.name = 'facenet'  # get this from saved_model_cli
        request.model_spec.signature_name = 'serving_default'  # get this from saved_model_cli

        # Run forward pass to calculate embeddings
        nrof_images = len(images_paths)
        nrof_batches_per_epoch = int(math.ceil(nrof_images / batch_size))
        emb_array = np.zeros((nrof_images, 512))
        for i in range(nrof_batches_per_epoch):
            start_index = i * batch_size
            end_index = min((i + 1) * batch_size, nrof_images)
            paths_batch = images_paths[start_index:end_index]
            images = self.load_and_align_data(
                paths_batch, image_size=160, margin=44, gpu_memory_fraction=1.0)

            request.inputs['image_batch'].CopyFrom(
                tf.contrib.util.make_tensor_proto(images, shape=images.shape, dtype=tf.float32))
            request.inputs['phase_train'].CopyFrom(tf.contrib.util.make_tensor_proto(False))
            result = stub.Predict(request, 10.0)  # 10 secs timeout
            np_res = np.array(result.outputs['embeddings'].float_val).reshape(
                [len(paths_batch), 512])
            emb_array[start_index:end_index, :] = np_res
        return emb_array


img_list = ['data/images/test/ajinkya_open34.jpg', 'data/images/test/ajinkya_open35.jpg']

object1 = serving_class()
emb = object1.get_embeddings(img_list, 1)

print(emb)
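
As a side note, the manual reshape of result.outputs['embeddings'].float_val inside get_embeddings could likely be replaced with tf.make_ndarray, which is part of the public TF API and decodes a TensorProto (including the shape recorded in it) directly; a sketch, assuming result is the PredictResponse returned by stub.Predict in the loop above:

# Decode the TensorProto in the response into a NumPy array; the shape
# stored in the proto is preserved, so no manual reshape is needed.
np_res = tf.make_ndarray(result.outputs['embeddings'])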

I looked at facenet/models.config to find the value for request.model_spec.name (facenet).
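
For reference, a minimal models.config for TensorFlow Serving looks like this (the base_path below is a placeholder; the file is passed to the server with --model_config_file):

model_config_list {
  config {
    name: 'facenet'
    base_path: '/models/facenet'
    model_platform: 'tensorflow'
  }
}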

When I run python3 src/serving_tf_docker_client.py I get:

__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Creating networks and loading parameters
2020-01-31 17:34:33.484882: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
WARNING:tensorflow:From /home/serveradmin/anaconda3/envs/tensorflow1.7_p35_facenet_env/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:

Use the retry module or similar alternatives.
Creating networks and loading parameters

[[-0.03109206  0.0125307  -0.04330589 ... -0.04297607  0.04209613
   0.04481008]
 [-0.00280518  0.03722835 -0.03977793 ... -0.02755948  0.02463386
  -0.02246956]]
0 reactions
zh3389 commented, Dec 7, 2020

Thanks, it works well.
