
Trouble sending image data to Triton Server


Description

I am trying to send an image to my model hosted on Triton Server. I have tried doing this in two ways. The first is sending the image like this:

{
    "id": "1",
    "inputs": [
        {
            "name": "input_tensor",
            "datatype": "UINT8",
            "shape": [
                1,
                1,
                1,
                3
            ],
            "data": [--image data as bytes (type is unsigned int8)--]
        }
    ]
}

With this, I get this error: {"error":"attempt to access JSON non-unsigned-integer as unsigned-integer"}.
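That error usually means the "data" field contains something other than plain JSON numbers (for example a bytes object or a base64 string). A minimal Python sketch of the JSON-only request path, under that assumption (the tensor name and 3-byte stand-in image are illustrative):

```python
import json

# Stand-in for real image data with shape [1, 1, 1, 3]; three UINT8 values.
image_bytes = bytes([10, 20, 30])

payload = {
    "id": "1",
    "inputs": [
        {
            "name": "input_tensor",
            "datatype": "UINT8",
            "shape": [1, 1, 1, 3],
            # list(...) turns each byte into a Python int 0-255, which
            # serializes as a JSON unsigned integer. Passing raw bytes or a
            # base64 string here is what triggers the
            # "non-unsigned-integer as unsigned-integer" error.
            "data": list(image_bytes),
        }
    ],
}

body = json.dumps(payload)
print(body)
```

The resulting body could then be POSTed to the model's /infer endpoint with Content-Type: application/json.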

The other way I have tried is the Binary Tensor Data Extension, but I’ve gotten a few errors, one being: http_server.cc:2327 Infer failed: must specifiy valid 'Infer-Header-Content-Length' in request header and 'binary_data_size' when passing inputs in binary data format.

For the Binary Tensor Data Extension I am confused about how to set the call up properly, because the docs say we need to send the request as an octet-stream but also include a JSON body. How are we to accomplish both of these in one call?
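The extension resolves this by concatenating the two: the request body is the JSON inference header immediately followed by the raw tensor bytes, sent as one octet-stream POST, and the Inference-Header-Content-Length HTTP header tells Triton where the JSON ends and the binary data begins. A hedged sketch of that framing (the tensor name and 3-byte stand-in payload are illustrative):

```python
import json

# Stand-in for raw UINT8 image data; shape [1,1,1,3] -> 3 elements -> 3 bytes.
tensor_bytes = b"\x01\x02\x03"

header = {
    "inputs": [{
        "name": "input_tensor",
        "datatype": "UINT8",
        "shape": [1, 1, 1, 3],
        # binary_data_size marks this input's bytes in the trailing binary blob.
        "parameters": {"binary_data_size": len(tensor_bytes)},
    }]
}
header_bytes = json.dumps(header).encode("utf-8")

# One body: JSON header first, then the raw tensor bytes.
body = header_bytes + tensor_bytes

http_headers = {
    "Content-Type": "application/octet-stream",
    # Length of the JSON part only, so Triton knows where the binary starts.
    "Inference-Header-Content-Length": str(len(header_bytes)),
}
# e.g. requests.post("http://localhost:8000/v2/models/<model>/infer",
#                    data=body, headers=http_headers)
```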

Triton Information

What version of Triton are you using? 20.13

Are you using the Triton container or did you build it yourself? Built using Docker

To Reproduce

Using a TensorFlow 2 Object Detection API model which returns num_detections, detection_boxes, detection_scores, detection_classes, and image_tensor.

Expected behavior

I am trying to successfully send image data to Triton Server from my client project and get a response back.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 16 (5 by maintainers)

Top GitHub Comments

1 reaction
zet809 commented, Nov 8, 2021

Hi @gioipv, sorry for misleading you. The purpose of adding the image size in hex in the stuff_mug step was just to show how the “binary_data_size” in postdata.json is obtained.

So 0x0f5992 is the image binary data size. Then “binary_data_size” = image data size (0x0f5992, i.e. 1005970) + the length of the size prefix (4), which gives 1005974.

As for why the hex data size needs to be prepended to the binary data when appending the image to the post JSON, it may be related to how binary data is processed inside the HTTP server; I didn’t pay much attention to that part.
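The arithmetic described above can be checked in a couple of lines (a sketch; the only assumption is that the prefix is 4 bytes, as stated in the comment):

```python
# 0x0f5992 is the image file size quoted in the comment.
image_size = 0x0F5992
assert image_size == 1005970

# binary_data_size counts the 4-byte length prefix plus the image bytes.
binary_data_size = image_size + 4
print(binary_data_size)  # 1005974
```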

0 reactions
gioipv commented, Sep 16, 2021

Hi @mdable2, here are my steps for sending image data in binary format, for your reference. I still haven’t figured out how to send the image as "data": [--image data as bytes (type is unsigned int8)--], though.

  • prepend the image file size 1005970 (0x0f5992) to the image data:
# printf "\x00\x0f\x59\x92" | cat - ./images/mug.jpg > stuff_mug
# ls -l stuff_mug
-rw-r--r-- 1 root root 1005974 Feb  3 01:35 stuff_mug
  • prepare the post data in JSON format, then append the image size and image data to it:
# cat postdata.json
{"inputs":[{"name":"INPUT","shape":[1,1],"datatype":"BYTES","parameters":{"binary_data_size":<size of file stuff_mug, 1005974 here>}}],"outputs":[{"name":"OUTPUT","parameters":{"classification":3,"binary_data":true}}]}
# ls -l postdata.json
-rw-r--r-- 1 root root 188 Feb  3 01:35 postdata.json
# printf "\x00\x0f\x59\x92" | cat - ./images/mug.jpg >> postdata.json
# ls -l postdata.json
-rw-r--r-- 1 root root 1006161 Jan  6 05:50 postdata.json
  • do inference
# curl -X POST -H "Content-Type: application/octet-stream" -H "Inference-Header-Content-Length: <sizeof original postdata.json, 188 here>" -H "Content-Length: <size of final postdata.json, 1006161 here>" -H "Accept: */*" localhost:8000/v2/models/<model_name>/infer --data-binary "@postdata.json" -vv -o /workspace/myoutput
# cat /workspace/myoutput
{"model_name":"<model_name>","model_version":"1","outputs":[{"name":"OUTPUT","datatype":"BYTES","shape":[1,3],"parameters":{"binary_data_size":72}}]}0.723992:504:COFFEE MUG0.270952:968:CUP0.001160:967:ESPRESSO#
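The shell steps above can be sketched in Python as follows. This is a hedged translation, not an official client: the tensor names, the localhost endpoint, and the 3-byte stand-in image are taken from or modeled on the comment, and the length prefix follows the big-endian byte order used in the printf above.

```python
import json
import struct

# Stand-in for the contents of images/mug.jpg.
image_bytes = b"\xff\xd8\xff"

# 1. Prepend the element length as a 4-byte big-endian prefix, mirroring
#    printf "\x00\x0f\x59\x92" | cat - ./images/mug.jpg in the shell example.
element = struct.pack(">I", len(image_bytes)) + image_bytes

# 2. Build the JSON header; binary_data_size counts prefix + image bytes.
header = {
    "inputs": [{"name": "INPUT", "shape": [1, 1], "datatype": "BYTES",
                "parameters": {"binary_data_size": len(element)}}],
    "outputs": [{"name": "OUTPUT",
                 "parameters": {"classification": 3, "binary_data": True}}],
}
header_bytes = json.dumps(header).encode("utf-8")

# 3. The POST body is the JSON header followed by the binary element; the
#    Inference-Header-Content-Length header is the JSON length alone.
body = header_bytes + element
http_headers = {
    "Content-Type": "application/octet-stream",
    "Inference-Header-Content-Length": str(len(header_bytes)),
}
# e.g. requests.post("http://localhost:8000/v2/models/<model_name>/infer",
#                    data=body, headers=http_headers)
```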

@zet809 thank you for sharing. I read the binary data extension, but I don’t understand something in your comment: is 1005970 the image size in bytes? Why do you prepend the image size in hex format (0x0f5992) to stuff_mug, and how do you get the image size in bytes? Could you tell me about that? Thank you.
