
ONNX returning different results than same PyTorch model

See original GitHub issue

This is my conversion code (`model` is the trained network and `X` a sample input batch):

    import numpy as np
    import onnx
    import onnxruntime
    import torch

    y_pred = model(X)
    torch.onnx.export(model,
                      X,
                      'model.onnx',
                      export_params=True,
                      do_constant_folding=True,
                      dynamic_axes={'input.1': [0]})

    onnx_model = onnx.load('model.onnx')
    onnx.checker.check_model(onnx_model)

    ort_session = onnxruntime.InferenceSession('model.onnx')

    def to_numpy(tensor):
        return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()

    inputs = ort_session.get_inputs()[0]
    outputs = [node.name for node in ort_session.get_outputs()]

    pred_onx = ort_session.run(outputs, {inputs.name: to_numpy(X)})
    np.testing.assert_allclose(
        to_numpy(y_pred), pred_onx[0], rtol=1e-03, atol=1e-05)

Returns:

    Mismatch: 100%
    Max absolute difference: 0.9719614
    Max relative difference: 2446.9866
     x: array([[0.925023],
           [0.829571],
           [0.740345],...
     y: array([[1.353681e-03],
           [4.933715e-03],
           [3.919864e-02],...

My model:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Net(nn.Module):
        def __init__(self, in_features, dp0_amount=None, dp1_amount=None):
            super(Net, self).__init__()
            self.dp0_amount = dp0_amount
            self.dp1_amount = dp1_amount

            self.dense1 = nn.Linear(in_features=in_features, out_features=128)
            self.dense2 = nn.Linear(in_features=128, out_features=16)
            self.dense3 = nn.Linear(in_features=16, out_features=1)

            self.bn1 = nn.BatchNorm1d(128)
            self.bn2 = nn.BatchNorm1d(16)
            if dp0_amount is not None:
                self.dp0 = nn.Dropout(dp0_amount)
            if dp1_amount is not None:
                self.dp1 = nn.Dropout(dp1_amount)

        def forward(self, x):
            if self.dp0_amount is not None:
                x = self.dp0(x)

            x = self.dense1(x)
            x = self.bn1(x)
            x = F.relu(x)
            if self.dp1_amount is not None:
                x = self.dp1(x)

            x = self.dense2(x)
            x = self.bn2(x)
            x = F.relu(x)

            x = self.dense3(x)
            return torch.sigmoid(x).view(-1, 1)

Versions:

onnx                     1.5.0
onnxruntime              1.1.1
torch                    1.4.0
numpy                    1.17.3

Python 3.7.7, Ubuntu 18.04

I also tried using the Caffe2 backend instead of onnxruntime, and it gives the same wrong results.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 3
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

14 reactions · spandantiwari commented, Jun 19, 2020

@pancho111203 - can you set your model to eval mode using model.eval() before export? That may be needed because ONNX export is mainly for inference, and I see that you have dropout nodes in your model. Also, try using the latest opset by setting opset_version=11 in the export API call.

pytorch/pytorch#39046, which was suggested above, may not be related to the issue you are seeing, as that is possibly a different model.

2 reactions · SystemErrorWang commented, Jan 12, 2022

> @pancho111203 - can you set your model to eval mode using model.eval() before export? That may be needed because ONNX export is mainly for inference, and I see that you have dropout nodes in your model. Also, try using the latest opset by setting opset_version=11 in the export API call.
>
> pytorch/pytorch#39046, which was suggested above, may not be related to the issue you are seeing, as that is possibly a different model.

Thank you for your suggestions. I tried to follow them but still got different results between ONNX and PyTorch. I would like to know if there are any other possibilities.
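One way to see why model.eval() matters in this issue: BatchNorm1d and Dropout (both present in Net) compute different things in train and eval mode, so a PyTorch forward pass run in train mode will generally not match a deterministic ONNX inference. A minimal sketch with stand-in layers (not the issue's Net, just the same mode-sensitive layer types):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in stack containing the two mode-sensitive layer types from the issue.
block = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4), nn.Dropout(0.5))
x = torch.randn(8, 4)

block.train()
out_train = block(x)  # batch statistics + random dropout mask

block.eval()
out_eval = block(x)   # running statistics + dropout acts as identity

# The two outputs differ, which is the kind of mismatch the issue reports.
print(torch.allclose(out_train, out_eval))  # prints False
```

This is why exporting (or comparing against) a model left in train mode can produce a 100% mismatch even though the weights are identical.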


Top Results From Across the Web

  • Inference result is different between Pytorch and ONNX model — "I converted a Pytorch model to an ONNX model. However, output is different between the two models."
  • outputs are different between ONNX and pytorch — "Problem solved by adding model.eval() before running inference of the pytorch model in the test code."
  • the inference result is totally different after converting onnx to ... — converting a pretrained torchvision ResNet (https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py) into OpenVINO IR.
  • Operationalizing PyTorch Models Using ONNX and ... - Nvidia — "ONNX: an open and interoperable format for ML models. ... For example, weight format difference between PyTorch..."
  • torch.onnx — PyTorch master documentation — "This means that if your model is dynamic, e.g., changes behavior depending on input data, the export won't be accurate. Similarly, a trace..."
