
Perf_Analyzer always throwing 'std::length_error'

See original GitHub issue

Description

We are using perf_analyzer to profile our custom TTS model with Triton. We have tried various combinations of options, such as the following:

perf_client -m nemo_model1 -u $host:$port -i gRPC --concurrency-range 1:4:2
perf_client -m nemo_model1 -u "$host:$port" -i gRPC -v -v --input-data="random" --string-length 61

But every time we got the following error:

name: "nemo_model1"
versions: "1"
platform: "python"
inputs {
  name: "input__0"
  datatype: "BYTES"
  shape: -1
  shape: -1
}
outputs {
  name: "output__0"
  datatype: "INT16"
  shape: -1
  shape: -1
}

config {
  name: "nemo_model1"
  version_policy {
    latest {
      num_versions: 1
    }
  }
  max_batch_size: 128
  input {
    name: "input__0"
    data_type: TYPE_STRING
    dims: -1
  }
  output {
    name: "output__0"
    data_type: TYPE_INT16
    dims: -1
  }
  instance_group {
    name: "nemo_model1_0"
    count: 1
    gpus: 0
    kind: KIND_GPU
  }
  dynamic_batching {
    preferred_batch_size: 128
  }
  optimization {
    input_pinned_memory {
      enable: true
    }
    output_pinned_memory {
      enable: true
    }
  }
  backend: "python"
}

terminate called after throwing an instance of 'std::length_error'
  what():  vector::_M_default_append
Aborted (core dumped)

What are we doing wrong in invoking the command?

Triton Information

We are currently using v2.8.0.

Are you using the Triton container or did you build it yourself? We used the container:

FROM nvcr.io/nvidia/tritonserver:21.03-py3

ADD models /models

# Setup Python
RUN apt update
RUN apt upgrade -y
RUN apt install -y software-properties-common libfreetype6-dev libsndfile-dev
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt install -y python3-dev python3-venv
RUN /usr/bin/python3 -m pip install --upgrade pip

# Install NeMo
RUN pip install torch==1.10.1
ADD NeMo NeMo/
ADD nemo.patch NeMo/
WORKDIR NeMo
RUN patch -p1 < nemo.patch
RUN pip install .[tts]
WORKDIR ../
ENV PYTHONPATH=NeMo
# Copy hardcoded models
RUN mkdir models
COPY default_model/imda-only.txt default_model/imda-en_g2p.pt models/
RUN apt-get clean autoclean && apt-get autoremove --yes && rm -rf /var/lib/{apt,dpkg,cache,log}/

CMD ["/opt/tritonserver/bin/tritonserver", "--model-repository=/models"]

The following is our model configuration file:

name: "nemo_model1"
backend: "python"
max_batch_size: 128
input [
  {
    name: "input__0"
    data_type: TYPE_STRING
    dims: [ -1 ]
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_INT16
    dims: [ -1 ]
  }
]
dynamic_batching {}
instance_group [
  {
    count: 1
    kind: KIND_GPU
  }
]

How can we overcome this error with perf_analyzer? We are invoking perf_analyzer on an Ubuntu 20.04 node and have installed the required dependencies. However, for the following two packages there were no matching candidates, so we installed the available versions instead:

libopencv-dev=3.2.0+dfsg-4ubuntu0.1 \
libopencv-core-dev=3.2.0+dfsg-4ubuntu0.1 \

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments:13 (8 by maintainers)

Top GitHub Comments

1 reaction
GuanLuo commented, Mar 30, 2022

@tanmayv25 can correct me if I am wrong. The --shape should match the dims in your model config's input, so it should only need one dimension, and perf_analyzer will handle the batch dimension for you. And from the input.json that you gave, your input has only one element whose content is a string, so you should try --shape input__0:1. You can refer to the perf_analyzer doc for more detail.
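Combining this suggestion with the invocation from the question, the corrected command might look like the following (a sketch only: host/port variables as above, and the `--shape` value taken from the comment, not independently verified):

```shell
# Hypothetical corrected invocation: pass the non-batch shape of the
# string input explicitly, matching dims: [-1] in the model config.
perf_client -m nemo_model1 -u "$host:$port" -i gRPC \
    --input-data=random --string-length 61 \
    --shape input__0:1
```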

1 reaction
Tabrizian commented, Mar 29, 2022

The perf analyzer’s --shape argument looks incorrect. It should be --shape input__0:1,33. https://github.com/triton-inference-server/server/blob/main/docs/perf_analyzer.md#input-data

Perf Analyzer’s error handling needs to be improved to print a better error message instead of raising an exception. I have filed a ticket for this.
