Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Support torchscript export for FCOS

See original GitHub issue

I am trying to export a model trained on a custom dataset to TorchScript.

Any help would be appreciated.

Instructions To Reproduce the Issue and Full Logs

I trained a model and am now trying to export it to TorchScript, but I am running into problems.

I ran the following command:

export_lazy_config.py --config-file "path to .yaml with the config" --format torchscript --export-method scripting --output /export_output

I got the following error: Module 'ResNet' has no attribute '_out_features' (This attribute exists on the Python module, but we failed to convert Python type: 'omegaconf.listconfig.ListConfig' to a TorchScript type. Only tensors and (possibly nested) tuples of tensors, lists, or dicts are supported as inputs or outputs of traced functions, but instead got value of type ListConfig… Its type was inferred; try adding a type annotation for the attribute.)
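The message itself points at the root cause: the lazy config leaves the ResNet attribute as an omegaconf ListConfig, which TorchScript cannot type. Below is a minimal sketch of the failure mode and of the annotation-plus-conversion fix the error suggests; `TinyBackbone` is a hypothetical stand-in for illustration, not detectron2's actual ResNet.

```python
# A minimal sketch, assuming omegaconf and torch are installed; TinyBackbone
# is a hypothetical stand-in for detectron2's ResNet, not the real class.
from typing import List

import torch
from omegaconf import OmegaConf


class TinyBackbone(torch.nn.Module):
    def __init__(self, out_features):
        super().__init__()
        # Storing `out_features` as-is (a ListConfig) reproduces the
        # "failed to convert Python type: 'omegaconf.listconfig.ListConfig'"
        # error; the explicit annotation plus list() conversion fixes it.
        self._out_features: List[str] = list(out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if "stem" in self._out_features:
            x = x + 1
        return x


cfg_list = OmegaConf.create(["stem", "res3"])         # what a lazy config passes in
scripted = torch.jit.script(TinyBackbone(cfg_list))   # scripts cleanly
print(scripted(torch.zeros(2)))                       # tensor([1., 1.])
```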

Here is the log:

[05/24 17:31:06 detectron2]: Command line arguments: Namespace(format='torchscript', export_method='scripting', config_file='…/…/…/…/media/naeem/T7/trainers/fcos_R_50_FPN_1x.py/output/config.yaml', sample_image=None, run_eval=False, output='/export_output', opts=[])
[05/24 17:31:06 detectron2]: Contents of args.config_file=…/…/…/…/media/naeem/T7/trainers/fcos_R_50_FPN_1x.py/output/config.yaml:
dataloader:
  evaluator: {_target_: detectron2.evaluation.COCOEvaluator, dataset_name: '${…test.dataset.names}'}
  test:
    _target_: detectron2.data.build_detection_test_loader
    dataset: {_target_: detectron2.data.get_detection_dataset_dicts, filter_empty: false, names: maize_valid}
    mapper:
      _target_: detectron2.data.DatasetMapper
      augmentations:
      - {_target_: detectron2.data.transforms.ResizeShortestEdge, max_size: 1333, short_edge_length: 800}
      image_format: ${…train.mapper.image_format}
      is_train: false
    num_workers: 0
  train:
    _target_: detectron2.data.build_detection_train_loader
    dataset: {_target_: detectron2.data.get_detection_dataset_dicts, names: maize_train}
    mapper:
      _target_: detectron2.data.DatasetMapper
      augmentations:
      - _target_: detectron2.data.transforms.ResizeShortestEdge
        max_size: 1333
        sample_style: choice
        short_edge_length: [640, 672, 704, 736, 768, 800]
      - {_target_: detectron2.data.transforms.RandomFlip, horizontal: true}
      image_format: BGR
      is_train: true
      use_instance_mask: false
    num_workers: 4
    total_batch_size: 2
lr_multiplier:
  _target_: detectron2.solver.WarmupParamScheduler
  scheduler:
    _target_: fvcore.common.param_scheduler.MultiStepParamScheduler
    milestones: [60000, 80000, 90000]
    values: [1.0, 0.1, 0.01]
  warmup_factor: 0.001
  warmup_length: 0.011111111111111112
  warmup_method: linear
model:
  _target_: detectron2.modeling.FCOS
  backbone:
    _target_: detectron2.modeling.FPN
    bottom_up:
      _target_: detectron2.modeling.ResNet
      freeze_at: 2
      out_features: [res3, res4, res5]
      stages: {_target_: detectron2.modeling.ResNet.make_default_stages, depth: 50, norm: FrozenBN, stride_in_1x1: true}
      stem: {_target_: detectron2.modeling.backbone.BasicStem, in_channels: 3, norm: FrozenBN, out_channels: 64}
    in_features: [res3, res4, res5]
    out_channels: 256
    top_block: {_target_: detectron2.modeling.backbone.fpn.LastLevelP6P7, in_channels: 256, in_feature: p5, out_channels: '${…out_channels}'}
  focal_loss_alpha: 0.25
  focal_loss_gamma: 2.0
  head:
    _target_: detectron2.modeling.meta_arch.fcos.FCOSHead
    conv_dims: [256, 256, 256, 256]
    input_shape:
    - &id001 !!python/object/new:detectron2.layers.shape_spec.ShapeSpec [256, null, null, null]
    - *id001
    - *id001
    - *id001
    - *id001
    norm: GN
    num_classes: ${…num_classes}
    prior_prob: 0.01
  head_in_features: [p3, p4, p5, p6, p7]
  num_classes: 10
  pixel_mean: [103.53, 116.28, 123.675]
  pixel_std: [1.0, 1.0, 1.0]
  test_nms_thresh: 0.6
  test_score_thresh: 0.2
optimizer:
  _target_: torch.optim.SGD
  lr: 0.01
  momentum: 0.9
  params: {_target_: detectron2.solver.get_default_optimizer_params, weight_decay_norm: 0.0}
  weight_decay: 0.0001
train:
  amp: {enabled: false}
  checkpointer: {max_to_keep: 100, period: 5000}
  ddp: {broadcast_buffers: false, find_unused_parameters: false, fp16_compression: false}
  device: cuda
  eval_period: 5000
  init_checkpoint: ''
  log_period: 20
  max_iter: 1500
  output_dir: /media/naeem/T7/trainers/fcos_R_50_FPN_1x.py/output/
[05/24 17:31:06 detectron2]: Full config saved to /media/naeem/T7/trainers/fcos_R_50_FPN_1x.py/output/config.yaml
[05/24 17:31:06 d2.utils.env]: Using a generated random seed 6677598
[05/24 17:31:06 fvcore.common.checkpoint]: No checkpoint found. Initializing model from scratch
WARNING [05/24 17:31:07 d2.data.datasets.coco]: Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.
[05/24 17:31:07 d2.data.datasets.coco]: Loaded 5000 images in COCO format from /…/…/…/…/Scientific Project/dataset/annotations/instancesAnimal_val2017.json
[05/24 17:31:07 d2.data.build]: Distribution of instances among all 10 categories:
| category | #instances | category | #instances | category | #instances |
|:---------|:-----------|:---------|:-----------|:---------|:-----------|
| bird     | 427        | cat      | 202        | dog      | 218        |
| horse    | 272        | sheep    | 354        | cow      | 372        |
| elephant | 252        | bear     | 71         | zebra    | 266        |
| giraffe  | 232        |          |            |          |            |
| total    | 2666       |          |            |          |            |
[05/24 17:31:07 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333)]
[05/24 17:31:07 d2.data.common]: Serializing 5000 elements to byte tensors and concatenating them all …
[05/24 17:31:07 d2.data.common]: Serialized dataset takes 2.98 MiB
Traceback (most recent call last):
  File "D:\detectron2-master\detectron2-master\tools\deploy\export_lazy_config.py", line 257, in <module>
    exported_model = export_scripting(torch_model)
  File "D:\detectron2-master\detectron2-master\tools\deploy\export_lazy_config.py", line 103, in export_scripting
    ts_model = scripting_with_instances(ScriptableAdapter(), fields)
  File "c:\users\inaki\detectron2\detectron2\export\torchscript.py", line 55, in scripting_with_instances
    scripted_model = torch.jit.script(model)
  File "C:\Users\Inaki\.conda\envs\ScientificProject\lib\site-packages\torch\jit\_script.py", line 1265, in script
    return torch.jit._recursive.create_script_module(
  File "C:\Users\Inaki\.conda\envs\ScientificProject\lib\site-packages\torch\jit\_recursive.py", line 454, in create_script_module
    return create_script_module_impl(nn_module, concrete_type, stubs_fn)
  File "C:\Users\Inaki\.conda\envs\ScientificProject\lib\site-packages\torch\jit\_recursive.py", line 516, in create_script_module_impl
    script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
  File "C:\Users\Inaki\.conda\envs\ScientificProject\lib\site-packages\torch\jit\_script.py", line 594, in _construct
    init_fn(script_module)
  File "C:\Users\Inaki\.conda\envs\ScientificProject\lib\site-packages\torch\jit\_recursive.py", line 494, in init_fn
    scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
  File "C:\Users\Inaki\.conda\envs\ScientificProject\lib\site-packages\torch\jit\_recursive.py", line 516, in create_script_module_impl
    script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
  File "C:\Users\Inaki\.conda\envs\ScientificProject\lib\site-packages\torch\jit\_script.py", line 594, in _construct
    init_fn(script_module)
  File "C:\Users\Inaki\.conda\envs\ScientificProject\lib\site-packages\torch\jit\_recursive.py", line 494, in init_fn
    scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
  File "C:\Users\Inaki\.conda\envs\ScientificProject\lib\site-packages\torch\jit\_recursive.py", line 520, in create_script_module_impl
    create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
  File "C:\Users\Inaki\.conda\envs\ScientificProject\lib\site-packages\torch\jit\_recursive.py", line 371, in create_methods_and_properties_from_stubs
    concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError: Module 'ResNet' has no attribute '_out_features' (This attribute exists on the Python module, but we failed to convert Python type: 'omegaconf.listconfig.ListConfig' to a TorchScript type. Only tensors and (possibly nested) tuples of tensors, lists, or dicts are supported as inputs or outputs of traced functions, but instead got value of type ListConfig… Its type was inferred; try adding a type annotation for the attribute.):
  File "c:\users\inaki\detectron2\detectron2\modeling\backbone\resnet.py", line 446
        outputs = {}
        x = self.stem(x)
        if "stem" in self._out_features:
           ~~~~~~~~~~~~~~~~~~ <--- HERE
            outputs["stem"] = x
        for name, stage in zip(self.stage_names, self.stages):
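For anyone hitting this before it is fixed upstream, one possible workaround, sketched here and not part of detectron2's API (the helper name is made up), is to walk the instantiated model right before the torch.jit.script call and replace any OmegaConf containers left on submodules with plain Python objects:

```python
# Hypothetical pre-scripting cleanup; assumes an omegaconf-configured model.
from omegaconf import DictConfig, ListConfig, OmegaConf
from torch import nn


def unwrap_omegaconf_attrs(model: nn.Module) -> None:
    """Replace ListConfig/DictConfig attributes on every submodule with
    plain lists/dicts so torch.jit.script can infer their types."""
    for module in model.modules():  # visits `model` itself and all children
        replacements = {
            name: OmegaConf.to_container(value, resolve=True)
            for name, value in vars(module).items()
            if isinstance(value, (ListConfig, DictConfig))
        }
        for name, plain in replacements.items():
            setattr(module, name, plain)
```

Calling unwrap_omegaconf_attrs(torch_model) before the torch.jit.script call would turn _out_features back into a plain list of strings; resolve=True assumes any remaining interpolations are still resolvable at that point.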

Your Environment

sys.platform              win32
Python                    3.10.3 | packaged by conda-forge | (main, Mar 28 2022, 05:19:17) [MSC v.1916 64 bit (AMD64)]
numpy                     1.22.3
detectron2                0.6 @c:\users\inaki\detectron2\detectron2
Compiler                  MSVC 192930141
CUDA compiler             not available
DETECTRON2_ENV_MODULE     <not set>
PyTorch                   1.11.0 @C:\Users\Inaki\.conda\envs\ScientificProject\lib\site-packages\torch
PyTorch debug build       False
GPU available             Yes
GPU 0                     NVIDIA GeForce RTX 3050 Laptop GPU (arch=8.6)
Driver version            512.59
CUDA_HOME                 C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6
Pillow                    9.0.1
torchvision               0.12.0 @C:\Users\Inaki\.conda\envs\ScientificProject\lib\site-packages\torchvision
torchvision arch flags    C:\Users\Inaki\.conda\envs\ScientificProject\lib\site-packages\torchvision\_C.pyd; cannot find cuobjdump
fvcore                    0.1.5.post20220305
iopath                    0.1.9
cv2                       4.5.5

Issue Analytics

  • State: open
  • Created: a year ago
  • Comments: 7

Top GitHub Comments

2 reactions
dcy0577 commented on Aug 21, 2022

Hi, any new updates on this? I got exactly the same error when trying to deploy the new Mask R-CNN baseline with the lazy config.

0 reactions
ppwwyyxx commented on May 25, 2022

FCOSHead.forward needs a type annotation, just like RetinaNetHead.forward.
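For context, the annotation style that comment points to looks roughly like the toy head below; it is a sketch in the spirit of RetinaNetHead.forward, not detectron2's actual FCOSHead:

```python
# Toy multi-level head: the List[Tensor] annotations on forward are what
# let torch.jit.script type the inputs and outputs.
from typing import List, Tuple

import torch
from torch import Tensor, nn


class ToyHead(nn.Module):
    def __init__(self, channels: int = 8, num_classes: int = 4):
        super().__init__()
        self.cls_score = nn.Conv2d(channels, num_classes, 3, padding=1)
        self.bbox_pred = nn.Conv2d(channels, 4, 3, padding=1)

    def forward(self, features: List[Tensor]) -> Tuple[List[Tensor], List[Tensor]]:
        logits: List[Tensor] = []
        bbox_reg: List[Tensor] = []
        for feature in features:
            logits.append(self.cls_score(feature))
            bbox_reg.append(self.bbox_pred(feature))
        return logits, bbox_reg


scripted = torch.jit.script(ToyHead())  # succeeds once the signature is annotated
```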

Read more comments on GitHub >

Top Results From Across the Web

Export to TorchScript - Hugging Face
Here, we explain how to export and use our models using TorchScript. Exporting a model requires two things: ... Padding can help fill...
Read more >
arcgis.learn module | ArcGIS API for Python
Exports the model in the specified framework format ('PyTorch', 'tflite' 'torchscript', and 'TF-ONXX' (deprecated)). Only models saved with the default ...
Read more >
Exporting MMDetection models to ONNX format - Medium
MMDetection is an open-source object detection toolbox based on PyTorch. This article explains how to export MMDetection models to ONNX ...
Read more >
TorchScript — PyTorch 1.13 documentation
TorchScript supports a subset of the tensor and neural network functions that PyTorch provides. Most methods on Tensor as well as functions in...
Read more >
TorchScript: Tracing vs. Scripting - Yuxin's Blog
Export : refers to the process that turns a model written in eager-mode Python code into a graph that describes the computation. ·...
Read more >
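The tracing-versus-scripting distinction in that last post is the crux of this issue, since scripting is the path that chokes on the ListConfig attribute. A tiny, self-contained illustration in plain PyTorch:

```python
# Tracing records the one path taken for the example input; scripting
# compiles the control flow itself, so data-dependent branches survive.
import torch


def f(x: torch.Tensor) -> torch.Tensor:
    if x.sum() > 0:  # data-dependent branch
        return x * 2
    return -x


traced = torch.jit.trace(f, torch.ones(3))  # warns: the x * 2 path is baked in
scripted = torch.jit.script(f)

print(traced(-torch.ones(3)))    # tensor([-2., -2., -2.])  -- wrong branch
print(scripted(-torch.ones(3)))  # tensor([1., 1., 1.])     -- correct
```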
