
How do I create a properly formatted input_data_file for warmup?

See original GitHub issue

I’ve read over the docs, model_config.proto, as well as the warmup tests, and I don’t quite understand the file format for input_data_file.

I’m using the tensorflow backend, latest release of triton-inference-server. My config.pbtxt is as follows:

  platform: "tensorflow_savedmodel"
  max_batch_size: 0
  input [
    {
      name: "a"
      data_type: TYPE_STRING
      dims: [ 1 ]
      reshape: { shape: [ 1 ] }
    },
    {
      name: "b"
      data_type: TYPE_STRING
      dims: [ 1 ]
      reshape: { shape: [ 1 ] }
    },
    {
      name: "c"
      data_type: TYPE_STRING
      dims: [ 1 ]
      reshape: { shape: [ 1 ] }
    },
    {
      name: "d"
      data_type: TYPE_BOOL
      dims: [ 1 ]
      reshape: { shape: [ 1 ] }
    }
  ]
  output [
    {
      name: "output_0"
      data_type: TYPE_INT32
      dims: [ 1 ]
      reshape: { shape: [ 1 ] }
    }
  ]
  model_warmup [
    {
        name: "warmup_data"
        batch_size: 1
        inputs: {
            key: "a"
            value: {
                data_type: TYPE_STRING
                dims: [ 1 ]
                input_data_file: "raw_a"
            }
        }
        inputs: {
            key: "b"
            value: {
                data_type: TYPE_STRING
                dims: [ 1 ]
                input_data_file: "raw_b"
            }
        }
        inputs: {
            key: "c"
            value: {
                data_type: TYPE_STRING
                dims: [ 1 ]
                input_data_file: "raw_c"
            }
        }
        inputs: {
            key: "d"
            value: {
                data_type: TYPE_BOOL
                dims: [ 1 ]
                input_data_file: "raw_d"
            }
        }
    }
  ]

I’m struggling with how to create a raw file in the necessary format. I tried the following:

import numpy as np
np.array(["string_for_arg_a"]).tofile("raw_a")

But when I run tritonserver, I get the following error:

E0124 00:21:54.463405 1261 triton_model_instance.cc:86] warmup error: Invalid argument - unexpected number of string elements 2 for inference input 'a', expecting 1
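The np.tofile call above dumps NumPy's fixed-width Unicode buffer rather than the length-prefixed byte layout Triton parses for STRING tensors, which is the likely cause of the miscounted elements. A quick check of what that array actually contains:

```python
import numpy as np

a = np.array(["string_for_arg_a"])
print(a.dtype)           # <U16: fixed-width UTF-32, 4 bytes per character
print(len(a.tobytes()))  # 64 raw bytes, with no per-element length prefix
```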

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
lminer commented, Jan 26, 2022

It worked! Thank you.

0 reactions
tanmayv25 commented, Jan 26, 2022

@lminer You must write the serialized item to the file and not the ndarray object itself.

Try the following:

import numpy as np
from tritonclient.utils import serialize_byte_tensor

# Serialize the string tensor into Triton's length-prefixed byte format.
serialized = serialize_byte_tensor(
    np.array(["string_for_arg_a".encode("utf-8")], dtype=object)
)

# serialize_byte_tensor returns a numpy array wrapping the serialized
# buffer; .item() extracts the raw bytes object to write to disk.
with open("raw_a", "wb") as fh:
    fh.write(serialized.item())

Read more about the utility and what it returns here: https://github.com/triton-inference-server/client/blob/main/src/python/library/tritonclient/utils/__init__.py#L187
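For reference, the same on-disk layout can be reproduced with the standard library alone. This sketch assumes Triton's documented encoding for BYTES/STRING tensor elements: a 4-byte little-endian length prefix followed by the raw bytes.

```python
import struct

def serialize_strings(strings):
    """Encode strings in Triton's BYTES/STRING wire format:
    each element is a 4-byte little-endian length, then the bytes."""
    out = b""
    for s in strings:
        data = s.encode("utf-8")
        out += struct.pack("<I", len(data)) + data
    return out

# Same payload as the serialize_byte_tensor example above.
with open("raw_a", "wb") as fh:
    fh.write(serialize_strings(["string_for_arg_a"]))
```

Fixed-size types need no length prefix, so for the TYPE_BOOL input the plain raw bytes (e.g. np.array([True]).tofile("raw_d")) should suffice.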

Read more comments on GitHub >
