Unsupported shape calculation for operator nonMaximumSuppression - CoreML to ONNX conversion
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.7/site-packages/onnxmltools/convert/main.py", line 18, in convert_coreml
    custom_conversion_functions, custom_shape_calculators)
  File "/usr/local/lib/python3.7/site-packages/onnxmltools/convert/coreml/convert.py", line 60, in convert
    topology = parse_coreml(spec, initial_types, target_opset, custom_conversion_functions, custom_shape_calculators)
  File "/usr/local/lib/python3.7/site-packages/onnxmltools/convert/coreml/_parse.py", line 467, in parse_coreml
    topology.compile()
  File "/usr/local/lib/python3.7/site-packages/onnxconverter_common/topology.py", line 632, in compile
    self._infer_all_types()
  File "/usr/local/lib/python3.7/site-packages/onnxconverter_common/topology.py", line 508, in _infer_all_types
    operator.infer_types()
  File "/usr/local/lib/python3.7/site-packages/onnxconverter_common/topology.py", line 110, in infer_types
    registration.get_shape_calculator(self.type)(self)
  File "/usr/local/lib/python3.7/site-packages/onnxconverter_common/registration.py", line 68, in get_shape_calculator
    raise ValueError('Unsupported shape calculation for operator %s' % operator_name)
ValueError: Unsupported shape calculation for operator nonMaximumSuppression
I followed the steps to convert a Core ML model to ONNX; the error arises on onnx_model = onnxmltools.convert_coreml(coreml_model, 'TinyYOLOv2').
The model was created with Turi Create (Object Detection toolkit, Darknet architecture).
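For context, here is a minimal sketch of the failing conversion, assuming the Turi Create export was saved as TinyYOLOv2.mlmodel (the file name is illustrative):

```python
import coremltools
import onnxmltools

# Load the Turi Create export. Turi Create typically exports a Core ML
# pipeline whose second stage is a nonMaximumSuppression model, which is
# the operator named in the error.
coreml_model = coremltools.models.MLModel('TinyYOLOv2.mlmodel')

# This is the call that raises:
# ValueError: Unsupported shape calculation for operator nonMaximumSuppression
onnx_model = onnxmltools.convert_coreml(coreml_model, 'TinyYOLOv2')
onnxmltools.utils.save_model(onnx_model, 'TinyYOLOv2.onnx')
```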
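Since the traceback ends in onnxconverter_common's registry lookup (registration.get_shape_calculator), one untested stop-gap is to register a shape calculator for the missing layer yourself before converting. The sketch below rests on two assumptions: that the registry key is the operator type string shown in the error message, and that declaring the NMS outputs as float tensors of unknown shape is enough for type inference to get past the layer.

```python
import coremltools
import onnxmltools
from onnxconverter_common import registration
from onnxconverter_common.data_types import FloatTensorType

def nms_shape_calculator(operator):
    # Assumption: giving every output of the NMS layer an unknown-shape
    # float tensor type lets the topology's type inference continue past it.
    for output in operator.outputs:
        output.type = FloatTensorType()

# The key string is taken from the error message above.
registration.register_shape_calculator('nonMaximumSuppression', nms_shape_calculator)

coreml_model = coremltools.models.MLModel('TinyYOLOv2.mlmodel')  # illustrative path
onnx_model = onnxmltools.convert_coreml(coreml_model, 'TinyYOLOv2')
```

Even if shape inference then succeeds, the converter would still need an actual conversion function for the layer (the custom_conversion_functions argument visible in the same traceback, or registration.register_converter), so this may only move the failure to the conversion stage rather than produce a working ONNX model.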
I just saw that there has been a PR for this. I will install the current master branch, merge the PR and check if it works.
Update: Nope, the shape calculator for nonMaximumSuppression is still not registered.
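A quick way to check whether a given build actually registers the calculator is to query the same registry that raises the error in the traceback. A small sketch, assuming that importing the Core ML converter package is what triggers its built-in registrations:

```python
import onnxmltools.convert.coreml  # assumed to register the built-in Core ML shape calculators
from onnxconverter_common import registration

try:
    registration.get_shape_calculator('nonMaximumSuppression')
    print("nonMaximumSuppression shape calculator is registered")
except ValueError:
    print("nonMaximumSuppression shape calculator is NOT registered")
```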
Unfortunately, it looks like Core ML support has basically been dropped in this project. It's going on three years now that you can't really use onnxmltools to convert Core ML models.