Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

ONNX export failed on ATen operator sort because torch.onnx.symbolic.sort does not exist

See original GitHub issue

Ran into this issue when trying to save a graph from an LSTM. Here’s a simple example which reproduces it on my setup:

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
from tensorboardX import SummaryWriter  # used to write out the model graph

NUM_FEATURES = 2
SEQ_LEN = 8
BATCH_SIZE = 4

# prepare a dummy dataset: cumulative-sum features, zero-padded past each length
torch.manual_seed(5)
x = torch.ones([BATCH_SIZE, SEQ_LEN, NUM_FEATURES])
x = x.cumsum(dim=2)
L = torch.ones(BATCH_SIZE, dtype=torch.long)
for i in range(x.shape[0]):
    idx = 2 * (i + 1)
    if idx >= x.shape[1]:
        L[i] = x.shape[1]
    else:
        x[i, idx:, :] = 0  # zero out timesteps beyond this sequence's length
        L[i] = idx

y = torch.ones([BATCH_SIZE, 1], dtype=torch.long)
y[0:2] = 0

class LSTM(nn.Module):
    def __init__(self):
        super(LSTM, self).__init__()
        self.hidden_dim = 10
        self.input_size = NUM_FEATURES
        self.target_size = 2
        self.hidden = None
        self.bidirectional = False
        
        if self.bidirectional:
            self.num_directions = 2
        else:
            self.num_directions = 1

        self.lstm = nn.LSTM(self.input_size, self.hidden_dim, 1,
                            bidirectional=self.bidirectional)
        self.lstm.retain_variables = False

        self.hidden2target = nn.Linear(self.num_directions*self.hidden_dim, self.target_size)

    def init_hidden(self, batch_size):
        return (torch.zeros([self.num_directions, batch_size, self.hidden_dim],
                            requires_grad=True, dtype=torch.float32),
                torch.zeros([self.num_directions, batch_size, self.hidden_dim],
                            requires_grad=True, dtype=torch.float32))

    def forward(self, seqs, T):
        # seqs is [batch_size, seq_len, num_features]; batch_first=True is
        # passed to pack_padded_sequence below, so no permute to
        # [seq_len, batch, num_features] is needed here

        # initialize hidden state
        self.hidden = self.init_hidden(seqs.size(0))

        # sort the batch by descending sequence length, as required by
        # pack_padded_sequence; this in-graph sort is the ATen op that
        # the ONNX exporter cannot handle
        T, idx = T.sort(0, descending=True)
        seqs = seqs.index_select(0, idx)

        # pack the sequences
        seqs_packed = pack_padded_sequence(seqs, T.data, batch_first=True)

        lstm_out_packed, self.hidden = self.lstm(seqs_packed, self.hidden)

        # unpack the output
        lstm_out, _ = pad_packed_sequence(lstm_out_packed)

        # apply the linear head to the output at the final timestep
        # (lstm_out is [seq_len, batch, hidden], so lstm_out[-1] is the last step)
        targets = self.hidden2target(lstm_out[-1])

        return targets
    
model = LSTM()
model(x, L)  # an eager forward pass works fine

writer = SummaryWriter(comment='testing')
# the line below throws the error
writer.add_graph(model, (x, L), verbose=False)

Error:

/home/alistairewj/.virtualenvs/torch-0.4.0-py3/lib/python3.5/site-packages/torch/onnx/utils.py:365: UserWarning: ONNX export failed on ATen operator sort because torch.onnx.symbolic.sort does not exist
  .format(op_name, op_name))

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-117-1d66713a60c5> in <module>()
     88 writer = SummaryWriter(comment='testing-packed')
     89 # below throws error
---> 90 writer.add_graph(model, (x, L), verbose=False)

~/.virtualenvs/torch-0.4.0-py3/lib/python3.5/site-packages/tensorboardX/writer.py in add_graph(self, model, input_to_model, verbose)
    433                 print('add_graph() only supports PyTorch v0.2.')
    434                 return
--> 435         self.file_writer.add_graph(graph(model, input_to_model, verbose))
    436 
    437     @staticmethod

~/.virtualenvs/torch-0.4.0-py3/lib/python3.5/site-packages/tensorboardX/graph.py in graph(model, args, verbose)
     98     if verbose:
     99         print(graph)
--> 100     list_of_nodes = parse(graph)
    101     nodes = []
    102     node_stats = []

~/.virtualenvs/torch-0.4.0-py3/lib/python3.5/site-packages/tensorboardX/graph.py in parse(graph)
     20 
     21         uname = next(iter(n.outputs())).uniqueName()
---> 22         assert n.scopeName() != '', '{} has empty scope name'.format(n)
     23         scope[uname] = n.scopeName()
     24     if LooseVersion(torch.__version__) >= LooseVersion("0.4"):

AssertionError: %30 : Dynamic = onnx::Shape(%12)
 has empty scope name

This seems related to #145: is sort also not implemented for ONNX export? I'm using pytorch==0.4.0 and the latest version of tensorboardX (installed from GitHub).
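One possible workaround (a sketch, not a confirmed fix): tracing only records the operations executed inside forward(), so if the length-sort is hoisted out of the model and done in eager mode, the traced graph never contains aten::sort. This assumes forward() is changed to expect inputs already sorted by descending length and no longer calls T.sort() itself; note that, per the first comment below, the packed-sequence path may still fail for other reasons, so this only takes the sort symbolic out of the equation:

# hypothetical workaround: sort in eager mode, outside the traced forward()
L_sorted, idx = L.sort(0, descending=True)
x_sorted = x.index_select(0, idx)

writer = SummaryWriter(comment='testing-presorted')
writer.add_graph(model, (x_sorted, L_sorted), verbose=False)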

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Reactions: 4
  • Comments: 8 (3 by maintainers)

Top GitHub Comments

4 reactions
miguelvr commented, Aug 29, 2018

Having the same issue with an LSTM (no sorting operation)… any news on this? PyTorch 0.4.1 seems like a stable version; I think it's probably worth sorting this out.
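For reference, a hypothetical minimal version of the sort-free case this comment describes, on the same torch 0.4.x / tensorboardX setup. Even a bare nn.LSTM traced through add_graph could reportedly hit the same empty-scope-name assertion, presumably because some exporter-inserted nodes (like the onnx::Shape node in the traceback above) carry no scope name:

# hypothetical minimal repro without any sort, per the comment above
import torch
import torch.nn as nn
from tensorboardX import SummaryWriter

lstm = nn.LSTM(input_size=2, hidden_size=10, num_layers=1)
seq = torch.randn(8, 4, 2)  # [seq_len, batch, num_features]

writer = SummaryWriter(comment='lstm-no-sort')
writer.add_graph(lstm, (seq,), verbose=False)  # reportedly fails the same way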

2 reactions
danakianfar commented, Jul 6, 2018

Any update on this issue? @lanpa

Read more comments on GitHub >

Top Results From Across the Web

onnx/symbolic_helper.py · neilisaac/torch - Gemfury
'onnx::Constant': raise RuntimeError("Failed to export an ONNX attribute '" + v.node().kind() + "', since it's not constant, please try to make " "things ......
Read more >
torch.onnx — PyTorch master documentation
This mode can be used to export any operator (ATen or non-ATen) that is not registered and supported in ONNX. Exported falls through...
Read more >
PyTorch to ONNX export, ATen operators not supported ...
Export succeeds if I set the parameter operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK which means 'leave as is ATen ...
Read more >
Python API: torch/onnx/utils.py Source File - Caffe2
ONNX_ATEN_FALLBACK: if symbolic is missing, fall back on ATen op. OperatorExportTypes.RAW: export raw ir.
Read more >
ONNX supported TorchScript operators - PyTorch
ONNX support for TorchScript operators ... aten::sort: since opset 9. aten::split: since opset 9 ... Operators that are not yet supported ...
Read more >
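Two of the results above suggest ways around the missing symbolic. If the goal is a saved ONNX file rather than a TensorBoard graph, torch.onnx.export takes an operator_export_type argument; ONNX_ATEN_FALLBACK emits any op without a symbolic as a raw ATen node instead of raising (the resulting file then needs a backend, such as Caffe2, that understands ATen ops). And per the last result, aten::sort eventually gained a symbolic at opset 9, so on a newer PyTorch a plain export should go through. A sketch of both, reusing model, x, and L from the issue:

import torch.onnx

# fall back to raw ATen nodes for ops without an ONNX symbolic (e.g. sort)
torch.onnx.export(
    model, (x, L), 'lstm_aten_fallback.onnx',
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)

# on a recent PyTorch, aten::sort has a symbolic from opset 9 onward
torch.onnx.export(model, (x, L), 'lstm_opset9.onnx', opset_version=9)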
