C++ inference using a TorchScript-exported torchvision model fails with an error
I’m trying to use this approach to build my model (MobileNetV3-Small) from torchvision models. Training and validation (in Python) worked without any problem, but after saving the TorchScript module for use in C++ inference, I got this error:
```
terminate called after throwing an instance of 'torch::jit::ErrorReport'
  what():
Unknown type name 'NoneType':
Serialized   File "code/__torch__/torch/nn/modules/linear.py", line 6
  training : bool
  _is_full_backward_hook : Optional[bool]
  def forward(self: __torch__.torch.nn.modules.linear.Identity) -> NoneType:
                                                                   ~~~~~~~~ <--- HERE
    return None
class Linear(Module):
Aborted (core dumped)
```
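Incidentally, the serialized source that the C++ loader is rejecting can be read straight out of the archive, since a TorchScript `.pt` file is a plain zip (a quick debugging sketch; the entry is matched by suffix because the archive's internal prefix depends on the filename it was saved under):

```python
import zipfile

# A TorchScript .pt file is a zip archive; the Python-like source that the
# C++ loader parses lives under <archive>/code/. Dump the linear.py that
# the error message points at.
with zipfile.ZipFile("traced_mob_bsconv_model.pt") as zf:
    for name in zf.namelist():
        if name.endswith("code/__torch__/torch/nn/modules/linear.py"):
            print(zf.read(name).decode())
```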
My simplified TorchScript export code:
```python
import torch
import torch.nn as nn
from torchvision import models  # was missing in the original snippet

num_classes = 14
device = torch.device('cpu')

# Build MobileNetV3-Small and replace the classifier head
model = models.mobilenet_v3_small(pretrained=True)  # use_pretrained was undefined in the original snippet
num_ftrs = model.classifier[3].in_features
model.classifier[3] = nn.Linear(num_ftrs, num_classes)
model = model.to(device)

# Load the fine-tuned weights
checkpoint = torch.load('checkpoint/best_model_MobBsconv_ckpt.t7', map_location=device)
model.load_state_dict(checkpoint['model'])

# Trace with a dummy input and save the TorchScript module
img = torch.rand(1, 3, 224, 224).to(device)
model.eval()
ts = torch.jit.trace(model, img, strict=False)
ts.save("traced_mob_bsconv_model.pt")
```
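Continuing from the script above, it may be worth confirming that the saved archive round-trips in Python before moving to C++ (a minimal sanity check, assuming the same environment as the export):

```python
# Reload the saved archive and run one forward pass; if this fails too,
# the problem is in the export rather than in libtorch.
reloaded = torch.jit.load("traced_mob_bsconv_model.pt")
with torch.no_grad():
    out = reloaded(torch.rand(1, 3, 224, 224))
print(out.shape)  # expected: torch.Size([1, 14])
```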
This export script runs successfully, but the C++ side produces the error above. This is my simplified C++ code, which works for other models:
```cpp
try {
    this->module = torch::jit::load(ModelAddress);
} catch (const c10::Error& e) {
    std::cerr << "error loading the model: " << e.what() << std::endl;
    std::exit(EXIT_FAILURE);
}
half_ = (device_ != torch::kCPU);
this->module.to(device_);
if (half_) {
    module.to(torch::kHalf);
}
torch::NoGradGuard no_grad;
module.eval();
```
The error is thrown already at this load/initialization step, yet my other exported models load fine and run forward without issues.
I’m confused and need help.
Environment
env 1: system that trained and exported the TorchScript module (with the code above):

OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.10.2
Libc version: glibc-2.25
Python version: 3.6.9 (default, Jul 17 2020, 12:50:27) [GCC 8.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-48-generic-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 1060 6GB
Nvidia driver version: 450.66
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.19.4
[pip3] torch==1.9.0
[pip3] torchvision==0.10.0
env 2: system that runs the C++ code and gets the error:

OS: Ubuntu 18.04.4 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.15
Python version: 2.7.17 (default, Jul 20 2020, 15:37:01) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-42-generic-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.18.1
Top GitHub Comments
I will try using mobilenetv3 directly to see if I can reproduce.
@gmagogsfm can you have a look? Seems like an issue in the interpreter
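A minimal repro along the lines of the first comment (a sketch, assuming a stock pretrained MobileNetV3-Small with no custom head or checkpoint):

```python
import torch
from torchvision import models

# Trace the unmodified model exactly as in the report.
model = models.mobilenet_v3_small(pretrained=True).eval()
ts = torch.jit.trace(model, torch.rand(1, 3, 224, 224), strict=False)
ts.save("mobilenet_v3_small_traced.pt")
# Loading this file with torch::jit::load in C++ should show whether the
# failure comes from the model's serialized code or from the custom checkpoint.
```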