InferenceError: [TypeInferenceError] Cannot infer type and shape for node name TreeEnsembleClassifier. No opset import for domainai.onnx.ml optype TreeEnsembleClassifier
Bug Report
Is the issue related to model conversion?
Describe the bug
- I built an ONNX model for XGBoost and another one for an MLP, and I want to merge them together. I use merge_graphs because I need to manipulate the graphs a bit, just for practice. The error below comes up:
InferenceError Traceback (most recent call last)
<ipython-input-5-094e20c68907> in <module>
15 graph_mlp,
16 io_map = [('input', 'input_1')],
---> 17 outputs = ["output_label", "output_1"]
18 )
~/miniconda3/lib/python3.7/site-packages/onnx/compose.py in merge_graphs(g1, g2, io_map, inputs, outputs, prefix1, prefix2, name, doc_string)
156
157 if len(g1_inputs) < len(g1.input) or len(g1_outputs) < len(g1.output):
--> 158 e1 = utils.Extractor(helper.make_model(g1))
159 g1 = e1.extract_model(g1_inputs, g1_outputs).graph
160
~/miniconda3/lib/python3.7/site-packages/onnx/utils.py in __init__(self, model)
13 class Extractor:
14 def __init__(self, model: ModelProto) -> None:
---> 15 self.model = onnx.shape_inference.infer_shapes(model)
16 self.graph = self.model.graph
17 self.wmap = self._build_name2obj_dict(self.graph.initializer)
~/miniconda3/lib/python3.7/site-packages/onnx/shape_inference.py in infer_shapes(model, check_type, strict_mode, data_prop)
32 if isinstance(model, (ModelProto, bytes)):
33 model_str = model if isinstance(model, bytes) else model.SerializeToString()
...
---> 34 inferred_model_str = C.infer_shapes(model_str, check_type, strict_mode, data_prop)
35 return onnx.load_from_string(inferred_model_str)
36 elif isinstance(model, str):
InferenceError: [TypeInferenceError] Cannot infer type and shape for node name TreeEnsembleClassifier. No opset import for domainai.onnx.ml optype TreeEnsembleClassifier
- Then I tried to merge the two ONNX models instead of the graphs. The domain error below shows up, even though I think I set up the same domains for both models (a quick check of the opset imports is sketched after the traceback).
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-19-7360c41ba1b0> in <module>
3 onnx_mlp,
4 io_map = [('input', 'input_1')],
----> 5 outputs = ["output_label", "output_1"]
6 )
~/miniconda3/lib/python3.7/site-packages/onnx/compose.py in merge_models(m1, m2, io_map, inputs, outputs, prefix1, prefix2, name, doc_string, producer_name, producer_version, domain, model_version)
301 if entry.version != found_version:
302 raise ValueError(
--> 303 "Can't merge two models with different operator set ids for a given domain. "
304 f"Got: {m1.opset_import} and {m2.opset_import}")
305 else:
ValueError: Can't merge two models with different operator set ids for a given domain. Got: [domain: ""
version: 9
, domain: "ai.onnx.ml"
version: 1
] and [domain: ""
version: 13
, domain: "ai.onnx.ml"
version: 2
]
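As a quick sanity check (a minimal sketch, using the model files produced by the reproduction code below), the opset imports of both models can be printed before merging; merge_models requires identical versions for every domain the two models share:

import onnx

onnx_xgb = onnx.load('xgb.onnx')
onnx_mlp = onnx.load('tf_mlp.onnx')
# Both printouts must agree on the versions of '' and 'ai.onnx.ml'
# before merge_models will accept the pair.
print(onnx_xgb.opset_import)
print(onnx_mlp.opset_import)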
System information
- OS Platform and Distribution: Linux Ubuntu 18.04
- ONNX version: 1.12.0
- skl2onnx version: 1.12
- xgboost version: 1.6.2
- tensorflow version: 2.2.0
- Python version: 3.7.6
Reproduction instructions
Code to reproduce errors
# %%
import warnings
warnings.filterwarnings('ignore')
import os
import time
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn import metrics
import onnxruntime as rt
from skl2onnx.common.data_types import FloatTensorType, Int64TensorType
from skl2onnx import convert_sklearn, to_onnx, update_registered_converter, get_model_alias
from skl2onnx.common.shape_calculator import (
    calculate_linear_classifier_output_shapes,
    calculate_linear_regressor_output_shapes)
from onnxmltools.convert.xgboost.operator_converters.XGBoost import (
    convert_xgboost)
from skl2onnx.proto import onnx_proto
from skl2onnx.common._registration import get_shape_calculator
from onnxconverter_common.onnx_ops import (
    apply_identity, apply_concat, apply_add, apply_mul, apply_slice, apply_cast,
    apply_squeeze
)
import onnx
import tensorflow as tf
import tf2onnx
from tf2onnx import tf_loader
# %%
X, y = make_classification(n_samples=8100)
XE_trn, yE_trn = X[:8000], y[:8000]
yE_trn_onehot = np.zeros((len(yE_trn), 2))
yE_trn_onehot[np.arange(len(yE_trn)), yE_trn] = 1.0
XE_tst, yE_tst = X[8000:], y[8000:]
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(2, activation='softmax'))
model.compile(loss=tf.keras.losses.categorical_crossentropy,
              optimizer=tf.keras.optimizers.Adam(0.0001))
model.fit(XE_trn, yE_trn_onehot, batch_size=256, epochs=10, verbose=0)
print()
onnx_model,_ = tf2onnx.convert.from_keras(model, opset=13)
onnx.save(onnx_model, 'tf_mlp.onnx')
# %%
X, y = make_classification(n_samples=10000)
XA_trn, yA_trn = X[:8000], y[:8000]
XA_tst, yA_tst = X[8000:], y[8000:]
clfA = xgb.XGBClassifier(n_estimators=100,
                         learning_rate=0.01,
                         eval_metric='error',
                         n_jobs=-1)
clfA = clfA.fit(XA_trn, yA_trn)
update_registered_converter(
    xgb.XGBClassifier, 'XGBoostXGBClassifier',
    calculate_linear_classifier_output_shapes,
    convert_xgboost,
    options={'nocl': [True, False], 'zipmap': [True, False, 'columns']})
model_onnx = convert_sklearn(
    clfA, 'xgb',
    [('input', FloatTensorType([None, 20]))],
    target_opset={'': 13, 'ai.onnx.ml': 2}
)
print(model_onnx.opset_import)
with open('xgb.onnx', 'wb') as f:
    f.write(model_onnx.SerializeToString())
# %%
onnx_mlp = onnx.load('tf_mlp.onnx')
onnx_xgb = onnx.load('xgb.onnx')
graph_mlp = onnx_mlp.graph
graph_xgb = onnx_xgb.graph
# for node in graph_mlp.node:
# print(node.name, '->', node.op_type, '->', node.input, '->', node.output)
# print('')
# for node in graph_xgb.node:
# print(node.name, '->', node.op_type, '->', node.input, '->', node.output)
graph_merge = onnx.compose.merge_graphs(
    graph_xgb,
    graph_mlp,
    io_map=[('input', 'input_1')],
    outputs=["output_label", "output_1"]
)
# %%
onnx_two = onnx.compose.merge_models(
    onnx_xgb,
    onnx_mlp,
    io_map=[('input', 'input_1')],
    outputs=["output_label", "output_1"]
)
Expected behavior
Notes
I attached the notebook that I used: bug.zip (https://github.com/onnx/onnx/files/9435141/bug.zip)
Hi, TreeEnsembleClassifier belongs to the domain ai.onnx.ml, so the model needs to specify the version of the ai.onnx.ml opset it is using. You can directly update the opset version for "ai.onnx.ml" as below, I think:
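(A minimal sketch: set_opset is an illustrative helper, not an onnx API, and the model variable is the one from the reproduction code.)

import onnx
from onnx import helper

def set_opset(model, domain, version):
    # Illustrative helper: overwrite, or add, the opset entry for one domain.
    for entry in model.opset_import:
        if entry.domain == domain:
            entry.version = version
            return
    model.opset_import.append(helper.make_opsetid(domain, version))

onnx_xgb = onnx.load('xgb.onnx')
set_opset(onnx_xgb, 'ai.onnx.ml', 2)  # make both models agree on ai.onnx.ml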
Note that you need to make sure the imports for both domains are consistent across the two models. Xavier’s code above only changes the version of domain "", the standard (default) domain.
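A sketch of what aligning the default domain can look like: onnx ships a version converter, but only for domain "", so here it is applied to the Keras/MLP model, which only uses default-domain ops (file names are the ones from the reproduction code).

import onnx
from onnx import version_converter

onnx_mlp = onnx.load('tf_mlp.onnx')
# Rewrite the default-domain ops to the version the other model expects
# (e.g. 13); ops from other domains are not handled by this converter.
onnx_mlp_13 = version_converter.convert_version(onnx_mlp, 13)
onnx.save(onnx_mlp_13, 'tf_mlp_opset13.onnx')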
I figured out a workaround. If you want to combine a TF model and a scikit-learn model, you can use an "Identity" node to connect the two graphs. But after merging the graphs, do not run the model checker, since it still causes trouble. Just use skl2onnx to load the ONNX model; skl2onnx is kind of a superset of onnx in terms of operator sets.
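A rough sketch of that workaround, under the assumption that tensor, node and initializer names do not collide between the two graphs (merge_graphs normally handles prefixing for you); the file and tensor names are taken from the reproduction code.

import onnx
from onnx import helper

onnx_xgb = onnx.load('xgb.onnx')
onnx_mlp = onnx.load('tf_mlp.onnx')
graph_xgb, graph_mlp = onnx_xgb.graph, onnx_mlp.graph

# Bridge the shared input: feed the XGBoost graph's 'input' into the MLP
# graph's 'input_1' through an Identity node instead of using io_map.
bridge = helper.make_node('Identity', inputs=['input'], outputs=['input_1'],
                          name='bridge_input')

merged_graph = helper.make_graph(
    list(graph_xgb.node) + [bridge] + list(graph_mlp.node),
    'xgb_mlp_merged',
    inputs=list(graph_xgb.input),
    outputs=[o for o in list(graph_xgb.output) + list(graph_mlp.output)
             if o.name in ('output_label', 'output_1')],
    initializer=list(graph_xgb.initializer) + list(graph_mlp.initializer),
)
# Declare both opset imports explicitly so shape inference can see ai.onnx.ml.
merged_model = helper.make_model(
    merged_graph,
    opset_imports=[helper.make_opsetid('', 13),
                   helper.make_opsetid('ai.onnx.ml', 2)],
)
onnx.save(merged_model, 'merged.onnx')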