
Fail to analyze ensemble model: "inference.ModelConfig" should not have multiple "scheduling_choice" oneof fields

See original GitHub issue

When I use model-analyzer to analyze an ensemble model with local launch mode, it always fails with the following error:

root@dl:/inference# model-analyzer profile --checkpoint-directory checkpoints -m $PWD/model_repo --profile-models quartznet-ensemble --output-model-repository-path=/output_repo/temp --override-output-model-repository --client-protocol grpc --run-config-search-max-concurrency 800 --run-config-search-max-instance-count 2 --run-config-search-max-preferred-batch-size 64

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/json_format.py", line 538, in _ConvertFieldValuePair
    raise ParseError('Message type "{0}" should not have multiple '
google.protobuf.json_format.ParseError: Message type "inference.ModelConfig" should not have multiple "scheduling_choice" oneof fields.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/model-analyzer", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/dist-packages/model_analyzer/entrypoint.py", line 315, in main
    analyzer.profile(client=client)
  File "/usr/local/lib/python3.8/dist-packages/model_analyzer/analyzer.py", line 104, in profile
    self._model_manager.run_model(model=model)
  File "/usr/local/lib/python3.8/dist-packages/model_analyzer/model_manager.py", line 84, in run_model
    self._run_model_with_search(model)
  File "/usr/local/lib/python3.8/dist-packages/model_analyzer/model_manager.py", line 138, in _run_model_with_search
    self._run_model_config_sweep(model, search_model_config=True)
  File "/usr/local/lib/python3.8/dist-packages/model_analyzer/model_manager.py", line 167, in _run_model_config_sweep
    self._run_config_generator.generate_run_config_for_model_sweep(
  File "/usr/local/lib/python3.8/dist-packages/model_analyzer/config/run/run_config_generator.py", line 98, in generate_run_config_for_model_sweep
    model_config = ModelConfig.create_from_dictionary(
  File "/usr/local/lib/python3.8/dist-packages/model_analyzer/triton/model/model_config.py", line 117, in create_from_dictionary
    protobuf_message = json_format.ParseDict(model_dict,
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/json_format.py", line 454, in ParseDict
    parser.ConvertMessage(js_dict, message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/json_format.py", line 485, in ConvertMessage
    self._ConvertFieldValuePair(value, message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/json_format.py", line 599, in _ConvertFieldValuePair
    raise ParseError(str(e))
google.protobuf.json_format.ParseError: Message type "inference.ModelConfig" should not have multiple "scheduling_choice" oneof fields.

The model repository I used can be downloaded here.
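
For context, the ParseError in the traceback is raised by google.protobuf's json_format when a dictionary sets more than one member of the same oneof. In Triton's model_config.proto, ensemble_scheduling, dynamic_batching, and sequence_batching all belong to the "scheduling_choice" oneof, so a generated config that combines ensemble_scheduling (already present in the ensemble's config.pbtxt) with another scheduler, presumably added by the automatic config sweep, cannot be parsed back into an inference.ModelConfig message. Below is a minimal sketch of that failure, assuming tritonclient[grpc] is installed for the generated model_config_pb2 module; the dictionary is illustrative, not the exact one model-analyzer builds:

from google.protobuf import json_format
from tritonclient.grpc import model_config_pb2

# Illustrative config dictionary: ensemble_scheduling and dynamic_batching
# are both members of the "scheduling_choice" oneof in inference.ModelConfig.
model_dict = {
    "name": "quartznet-ensemble",         # model name from the issue
    "platform": "ensemble",
    "ensemble_scheduling": {"step": []},   # declared by the ensemble's config.pbtxt
    "dynamic_batching": {},                # assumed to be added by the config sweep
}

try:
    json_format.ParseDict(model_dict, model_config_pb2.ModelConfig())
except json_format.ParseError as err:
    # Prints: Message type "inference.ModelConfig" should not have multiple
    # "scheduling_choice" oneof fields.
    print(err)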

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Reactions: 1
  • Comments: 7 (3 by maintainers)

Top GitHub Comments

1 reaction
msalehiNV commented, Nov 16, 2022

@dhaval24 it’s on our short-term roadmap since it is a high-priority feature. I can share a more granular timeline with you over the email thread we have.

1 reaction
Tabrizian commented, Dec 29, 2021

@okanlv We don’t have any updates regarding the ensemble support. We’ll update this issue as soon as more information is available.

Read more comments on GitHub >

Top Results From Across the Web

nvidia_inferenceserver - Go Packages
GRPCInferenceServiceClient is the client API for GRPCInferenceService service. For semantics around ctx use and closing/ending streaming RPCs, please refer to ...
Getting Started MovieLens: Serving a TensorFlow Model
Before we get started, you should launch the Triton Inference Server docker ... Below, we will request the Triton server to load the...
