Config.pbtxt for EfficientDet-D0
Description
I am deploying the EfficientDet-D0 detection model as a TensorFlow SavedModel and facing an issue with the dimensions of the output tensor when serving it with nvcr.io/nvidia/tritonserver:20.10-py3. The model's output shape is [1, -1, 7], but Triton expects the -1 (batch) dimension to come first. I also tried declaring it as [-1, 7], but that did not work either.
My config.pbtxt file:
name: "EfficientDet"
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
  {
    name: "image_arrays:0"
    data_type: TYPE_UINT8
    format: FORMAT_NHWC
    dims: [ -1, -1, 3 ]
  }
]
output [
  {
    name: "detections:0"
    data_type: TYPE_FP32
    dims: [ 1, -1, 7 ]
  }
]
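For comparison, here is a config.pbtxt sketch that follows the convention from Triton's model-configuration docs: when max_batch_size > 0, dims describe a single request and the leading batch axis is implied, so the output would be declared as [-1, 7] rather than [1, -1, 7]. (Whether this alone is sufficient for this model is not confirmed in the issue; the maintainer comments below also point at the TF2 backend.)

```
name: "EfficientDet"
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
  {
    name: "image_arrays:0"
    data_type: TYPE_UINT8
    format: FORMAT_NHWC
    dims: [ -1, -1, 3 ]
  }
]
output [
  {
    name: "detections:0"
    data_type: TYPE_FP32
    dims: [ -1, 7 ]
  }
]
```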
I am using the docker container nvcr.io/nvidia/tritonserver:20.10-py3. The SavedModel is available here: https://tfhub.dev/tensorflow/efficientdet/d0/1
I expected the config.pbtxt to work with an output tensor of dimensions [1, 7].
Issue Analytics
- Created: 3 years ago
- Comments: 6 (3 by maintainers)
Top GitHub Comments
Please read this section of the documentation carefully: https://github.com/triton-inference-server/server/blob/master/docs/model_configuration.md#inputs-and-outputs
Setting max_batch_size > 0 places specific requirements on your model's inputs and outputs. You should also consider not providing a config.pbtxt at all and starting the server with --strict-model-config=false to see what configuration Triton generates for you: https://github.com/triton-inference-server/server/blob/master/docs/model_configuration.md#auto-generated-model-configuration
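The batching requirement this comment refers to can be illustrated in plain Python (the helper name `effective_shape` is ours for illustration, not a Triton API): with max_batch_size > 0, Triton prepends a variable-size batch axis to the declared dims, so dims must describe a single request without the batch dimension.

```python
def effective_shape(dims, max_batch_size):
    """Shape Triton expects on the wire: when max_batch_size > 0, a
    variable-size batch axis (-1) is prepended to the dims declared in
    config.pbtxt, so dims must not include the batch dimension itself."""
    if max_batch_size > 0:
        return [-1] + list(dims)
    return list(dims)

# Declaring dims: [1, -1, 7] with max_batch_size: 1 yields a 4-D shape,
# which cannot match the model's 3-D output [batch, num_detections, 7]:
print(effective_shape([1, -1, 7], max_batch_size=1))  # [-1, 1, -1, 7]
# Declaring dims: [-1, 7] yields the intended 3-D shape:
print(effective_shape([-1, 7], max_batch_size=1))     # [-1, -1, 7]
```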
@AkashDharani It looks like the saved model is in TF 2.0 SavedModel format. Can you try invoking the server with --backend-config=tensorflow,version=2?
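Combining both suggestions from the comments, a hypothetical server invocation might look like the following (the model-repository path is a placeholder; this is a sketch, not a confirmed fix for this issue):

```
tritonserver --model-repository=/models \
    --strict-model-config=false \
    --backend-config=tensorflow,version=2
```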