
TensorFlow model does not work during inference

See original GitHub issue

Description

I have a TensorFlow model saved in the SavedModel format. I am trying to deploy it with Triton server using the tensorflow_savedmodel platform. Triton server starts and my model loads, but when I try to use it and send a test image from the client, I get this error:

    raise get_error_grpc(rpc_error) from None
tritonclient.utils.InferenceServerException: [StatusCode.INVALID_ARGUMENT] unexpected inference output 'output' for model 'efnet'

I have no idea how to troubleshoot this and can't find anything in the documentation. Please give me some hints on where to look for the issue.
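
Since the server is rejecting the requested output name, a natural first check is to ask Triton which inputs and outputs it actually registered for this model. A minimal diagnostic sketch, assuming the server address and model name used in the client code below:

import tritonclient.grpc as grpcclient

# Diagnostic sketch: the URL and model name are taken from the client code
# in this issue, not prescribed by Triton.
client = grpcclient.InferenceServerClient(url="localhost:8001")

metadata = client.get_model_metadata(model_name="efnet")
print(metadata)   # tensor names, datatypes and shapes Triton reports for 'efnet'

config = client.get_model_config(model_name="efnet")
print(config)     # the model configuration as Triton actually loaded it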

Triton Information

tritonserver2.19.0-jetpack4.6.1

Are you using the Triton container or did you build it yourself? Built it myself.

To Reproduce

config.pbtxt:

name: "efnet"
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 5 ]
  }
]
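
The names and data types in this config have to line up with what the SavedModel's serving signature actually exports, and a mismatch there is one common source of "unexpected inference output" errors. A minimal sketch for inspecting the exported signature, assuming the model was exported with the default serving_default signature; the path below is a placeholder (for the tensorflow_savedmodel platform Triton looks for <model_repository>/efnet/1/model.savedmodel by default), and saved_model_cli show --dir <path> --all prints the same information:

import tensorflow as tf

# Placeholder path; adjust to where the SavedModel actually lives.
loaded = tf.saved_model.load("model_repository/efnet/1/model.savedmodel")
infer = loaded.signatures["serving_default"]

# Input specs (names, dtypes, shapes) as TensorFlow exported them
print(infer.structured_input_signature)
# Output specs as TensorFlow exported them
print(infer.structured_outputs)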

client:

import tritonclient.grpc as grpcclient
import numpy as np
import cv2
import sys


class Efnet_grpc():
    def __init__(self,
                 url="localhost:8001",
                 model_name="efnet",
                 input_width=224,
                 input_height=224,
                 model_version="",
                 verbose=False, conf_thresh=0.7) -> None:

        self.model_name = model_name
        self.input_width = input_width
        self.input_height = input_height
        self.batch_size = 1
        self.conf_thresh = conf_thresh
        self.input_shape = [self.batch_size, 3, self.input_height, self.input_width]
        self.input_name = 'input'
        self.output_name = 'output'
        self.output_size = 5
        self.triton_client = None
        self.init_triton_client(url)
        print('test_pred')
        self.test_predict()
        print('test_pred_done')


    def init_triton_client(self, url):
        try:
            triton_client = grpcclient.InferenceServerClient(
                url=url,
                verbose=False,
                ssl=False,
            )
        except Exception as e:
            print("channel creation failed: " + str(e))
            sys.exit()
        self.triton_client = triton_client


    def test_predict(self):
        input_images = np.zeros((*self.input_shape,), dtype=np.float32)
        _ = self.predict(input_images)


    def predict(self, input_images):
        inputs = []
        outputs = []

        inputs.append(grpcclient.InferInput(self.input_name, [*input_images.shape], "FP32"))
        # Initialize the data
        inputs[-1].set_data_from_numpy(input_images)
        outputs.append(grpcclient.InferRequestedOutput(self.output_name))

        # Test with outputs
        results = self.triton_client.infer(
            model_name=self.model_name,
            inputs=inputs,
            outputs=outputs)

        # Get the output arrays from the results
        return results.as_numpy(self.output_name)


    def preprocessing(self, image):
        image = cv2.resize(image, dsize=(self.input_width, self.input_height), interpolation=cv2.INTER_CUBIC)
        final_image = np.expand_dims(image, axis=0)
        return final_image


    def classifier_pred(self, image):
        proc_image = self.preprocessing(image)
        pred = self.predict(proc_image)
        print(pred)

As you can see, I use 'input' and 'output' in both the client and the config file. What's wrong here?
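
For reference, a rough sketch of how the client above might be driven end to end, with a placeholder image path. In this sketch the image comes from cv2.imread, which returns a uint8 HWC array, while the config declares an FP32 input of shape [3, 224, 224], so a cast and transpose are added before calling predict() directly:

# Usage sketch: assumes the Triton server is running and "test.jpg" is a
# placeholder path next to the script.
if __name__ == "__main__":
    model = Efnet_grpc(url="localhost:8001", model_name="efnet")
    image = cv2.imread("test.jpg")
    batch = model.preprocessing(image)                       # shape [1, 224, 224, 3], uint8
    batch = batch.transpose(0, 3, 1, 2).astype(np.float32)   # shape [1, 3, 224, 224], float32
    print(model.predict(batch))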

Issue Analytics

  • State: closed
  • Created a year ago
  • Comments: 9 (4 by maintainers)

Top GitHub Comments

1 reaction
dyastremsky commented, Aug 30, 2022

Those numbers can be int8, but it depends on what the model’s input types are. If a TensorFlow input was generated to be DT_FLOAT, then Triton will expect it to be FP32.
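
To make that concrete, a small illustration (names and shapes are borrowed from the client above, not prescribed by Triton): if the SavedModel input was exported as DT_FLOAT, then the config uses TYPE_FP32, the InferInput is declared "FP32", and the numpy buffer sent from the client must be float32.

import numpy as np
import tritonclient.grpc as grpcclient

# The whole chain has to agree on the datatype: DT_FLOAT in the SavedModel,
# TYPE_FP32 in config.pbtxt, "FP32" on the InferInput, float32 in numpy.
batch = np.zeros((1, 3, 224, 224), dtype=np.float32)
inp = grpcclient.InferInput("input", list(batch.shape), "FP32")
inp.set_data_from_numpy(batch)   # an int8/uint8 buffer here would be rejected as a datatype mismatch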

0 reactions
dyastremsky commented, Sep 2, 2022

Fantastic, any time!

Read more comments on GitHub.

Top Results From Across the Web

  • TensorFlow Inference - python - Stack Overflow: Here is the problem: I have a device which occasionally checks for updated models. It then needs to load that model and run...
  • TensorFlow Lite inference: The term inference refers to the process of executing a TensorFlow Lite model on-device in order to make predictions based on input data....
  • A Problem that Only One Inference Can Be Made ... - GitHub: Continuous inference is possible by reloading the model before every inference, but it is very slow. This problem does not occur in previous ......
  • Model inference using TensorFlow Keras API: This notebook demonstrates how to do distributed model inference using TensorFlow with ResNet-50 model and a Parquet file as input data.
  • TensorFlow 2.0 model inference not working as expected: Hello, I am trying to do inference with a compiled mnist model. The model has been built with Keras Functional API and with...
