
Config.pbtxt for EfficientDet-D0

See original GitHub issue

Description

I am deploying the EfficientDet-D0 detection model as a TensorFlow SavedModel and facing issues while serving it with nvcr.io/nvidia/tritonserver:20.10-py3. The problem concerns the dimensions of the output tensor: it is [1, -1, 7], but triton-server expects -1 in the first dimension. I also tried making it [-1, 7], but it is still not working.

My config.pbtxt file

name: "EfficientDet"
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
   {
      name: "image_arrays:0"
      data_type: TYPE_UINT8
      format: FORMAT_NHWC
      dims: [-1, -1, 3]
   }
]
output [
   {
      name: "detections:0"
      data_type: TYPE_FP32
      dims: [1, -1, 7 ]
   }
]

I am using docker container nvcr.io/nvidia/tritonserver:20.10-py3

You can get the saved_model from here: https://tfhub.dev/tensorflow/efficientdet/d0/1

I expected config.pbtxt to work with an output tensor of dimensions [1, 7].

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments:6 (3 by maintainers)

Top GitHub Comments

2 reactions
deadeyegoodwin commented, Nov 5, 2020

Please read this section of the documentation carefully: https://github.com/triton-inference-server/server/blob/master/docs/model_configuration.md#inputs-and-outputs

Setting max_batch_size > 0 places specific requirements on your model inputs and outputs. You should also consider not providing a config.pbtxt at all, using --strict-model-config=false, and seeing what Triton generates for you: https://github.com/triton-inference-server/server/blob/master/docs/model_configuration.md#auto-generated-model-configuration
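To illustrate the maintainer's point: when max_batch_size > 0, the linked documentation says the dims in config.pbtxt must exclude the batch dimension, which Triton adds implicitly. A sketch of the config with the leading 1 dropped from the output shape follows (this is an inference from the docs, not a config verified against this model; note the issue author reports that [-1, 7] alone still failed, which the later comment attributes to the TensorFlow backend version rather than the shape):

```
name: "EfficientDet"
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
   {
      name: "image_arrays:0"
      data_type: TYPE_UINT8
      format: FORMAT_NHWC
      dims: [ -1, -1, 3 ]
   }
]
output [
   {
      name: "detections:0"
      data_type: TYPE_FP32
      # Batch dimension omitted; with max_batch_size > 0,
      # Triton prepends it automatically.
      dims: [ -1, 7 ]
   }
]
```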

1 reaction
tanmayv25 commented, Nov 6, 2020

@AkashDharani It looks like the format of the saved model is a TF 2.0 SavedModel. Can you try invoking the server with --backend-config=tensorflow,version=2?
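For reference, a minimal sketch of how that flag might be passed when launching the 20.10 container. The model repository path, port mappings, and directory layout are illustrative assumptions, not details from the issue; only the image name and the two flags come from the thread:

```
# Assumed repository layout: /path/to/models/EfficientDet/1/model.savedmodel/
docker run --rm --gpus all \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/models:/models \
  nvcr.io/nvidia/tritonserver:20.10-py3 \
  tritonserver --model-repository=/models \
    --backend-config=tensorflow,version=2 \
    --strict-model-config=false
```

Combining --strict-model-config=false (from the first comment) with the TF2 backend selection lets Triton both load the model with the correct TensorFlow version and report the configuration it auto-generates.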

Read more comments on GitHub >

Top Results From Across the Web

EfficientDet-D0 trained and exported in Tensorflow 2.0 Object ...
config - Use the config file: None [ WARNING ] Failed to import Inference Engine Python API in: PYTHONPATH [ WARNING ] DLL...
Read more >
End-to-end Object Detection Using EfficientDet on Raspberry ...
This configuration file defines the network architecture and network params of EfficientDet. This is needed to customize the architecture so it can detect...
Read more >
TF2 Object Detection API - EfficientDet - Kaggle
Explore and run machine learning code with Kaggle Notebooks | Using data from multiple data sources.
Read more >
How to Train Your Own Object Detector Using TensorFlow ...
This is the final step of our Installation and Setup block! We're going to install the Object Detection API itself. You do this...
Read more >
finetuning EfficientDet-D0 from model zoo on PASCALVOC ...
Next I took one of the config files and adjusted it to fit the architecture and VOC dataset. When evaluating the resulting network...
Read more >
