
Segmentation fault (core dumped) on ensemble model from Triton (GPU) to Python Backend (CPU)

See original GitHub issue

Description

"Segmentation fault (core dumped)" in an ensemble model when the Python backend postprocessing step (on CPU) receives its input from a TensorRT model on GPU.

However, if I send the output from TensorRT to the client and then resend it back to the server, it works perfectly. This seems to be something reported in previous issues and not completely solved, at least for tritonserver on JetPack 4.4:

https://github.com/triton-inference-server/python_backend/pull/30
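
For reference, a minimal sketch of the client-side workaround described above, using the standard tritonclient Python library. The model and tensor names follow the config below; the shapes are taken from the logs and are only illustrative, and the preprocess step is skipped for brevity:

import numpy as np
import tritonclient.http as httpclient

# Connect to the Triton server.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Step 1: run the TensorRT model alone and pull LOGITS back to the client.
# (The preprocess step is omitted; FEATURES is fed directly.)
features = np.random.rand(1, 64, 464).astype(np.float32)
infer_input = httpclient.InferInput("FEATURES", list(features.shape), "FP32")
infer_input.set_data_from_numpy(features)
result = client.infer(
    "BasqueQ10x5",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("LOGITS")],
)
logits = result.as_numpy("LOGITS")  # arrives at the client as host memory

# Step 2: resend the logits to the Python backend model on CPU.
logits_input = httpclient.InferInput("LOGITS", list(logits.shape), "FP32")
logits_input.set_data_from_numpy(logits)
decoded = client.infer(
    "greedy",
    inputs=[logits_input],
    outputs=[httpclient.InferRequestedOutput("OUTPUT0")],
)
print(decoded.as_numpy("OUTPUT0"))

Going through the client forces the GPU output into host memory before the Python backend sees it, which matches the observation that the crash only happens on the in-server handoff.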

Triton Information

Triton Server 2.6 for Jetson. I have built the latest Python backend, which includes this PR:

https://github.com/triton-inference-server/python_backend/pull/30

Are you using the Triton container or did you build it yourself? I built it myself.

To Reproduce

The config.pbtxt:

name: "ensemblePMG_basque"
platform: "ensemble"
input [
  {
    name: "INPUT0"
    data_type: TYPE_FP32
    dims: [ 1, -1 ]
  }
]

output [
  {
    name: "OUTPUT0"
    data_type: TYPE_STRING
    dims: [ -1 ]
  }
]

ensemble_scheduling {
  step [
    {
      model_name: "preprocess"
      model_version: -1
      input_map {
        key: "INPUT0"
        value: "INPUT0"
      }
      output_map {
        key: "FEATURES"
        value: "FEATURES"
      }
    },
    {
      model_name: "BasqueQ10x5"
      model_version: -1
      input_map {
        key: "FEATURES"
        value: "FEATURES"
      }
      output_map {
        key: "LOGITS"
        value: "LOGITS"
      }
    },
    {
      model_name: "greedy"
      model_version: -1
      input_map {
        key: "LOGITS"
        value: "LOGITS"
      }
      output_map {
        key: "OUTPUT0"
        value: "OUTPUT0"
      }
    }
  ]
}
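
The model.py for the greedy step is not included in the report; below is a hypothetical minimal sketch of what a greedy decoder in the Python backend could look like, using the standard triton_python_backend_utils interface. The argmax/CTC-collapse logic and the blank id are assumptions, not the original code:

import numpy as np
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    """Hypothetical greedy decoder; the real model.py is not in the issue."""

    def execute(self, requests):
        responses = []
        for request in requests:
            # LOGITS arrives with shape [1, T, 37] according to the logs.
            logits = pb_utils.get_input_tensor_by_name(
                request, "LOGITS").as_numpy()
            ids = logits.argmax(axis=-1)[0]
            # CTC-style collapse: drop repeats and the blank symbol
            # (blank id 0 is an assumption).
            decoded, prev = [], -1
            for i in ids:
                if i != prev and i != 0:
                    decoded.append(int(i))
                prev = int(i)
            # Mapping ids to characters is model-specific; token ids are
            # joined here as a placeholder for the real vocabulary lookup.
            text = " ".join(str(i) for i in decoded).encode("utf-8")
            out = pb_utils.Tensor("OUTPUT0", np.array([text], dtype=object))
            responses.append(
                pb_utils.InferenceResponse(output_tensors=[out]))
        return responses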

Last logs before the failure:

I0201 09:40:12.665871 14297 infer_request.cc:502] prepared: [0x0x7ee00840f0] request id: 1, model: BasqueQ10x5, requested version: -1, actual version: 1, flags: 0x0, correlation id: 0, batch size: 0, priority: 0, timeout (us): 0
original inputs:
[0x0x7ee0003968] input: FEATURES, type: FP32, original shape: [1,64,464], batch + shape: [1,64,464], shape: [1,64,464]
override inputs:
inputs:
[0x0x7ee0003968] input: FEATURES, type: FP32, original shape: [1,64,464], batch + shape: [1,64,464], shape: [1,64,464]
original requested outputs:
LOGITS
requested outputs:
LOGITS


I0201 09:40:12.665983 14297 python.cc:926] TRITONBACKEND_ModelInstanceExecute: model instance name preprocess_0 released 1 requests
I0201 09:40:12.666197 14297 plan_backend.cc:2322] Running BasqueQ10x5_0_0_gpu0 with 1 requests
I0201 09:40:12.666463 14297 plan_backend.cc:3207] Optimization profile default [0] is selected for BasqueQ10x5_0_0_gpu0
I0201 09:40:12.666887 14297 plan_backend.cc:2721] Context with profile default [0] is being executed for BasqueQ10x5_0_0_gpu0
I0201 09:40:12.689471 14297 infer_response.cc:139] add response output: output: LOGITS, type: FP32, shape: [1,232,37]
I0201 09:40:12.689572 14297 ensemble_scheduler.cc:509] Internal response allocation: LOGITS, size 34336, addr 0xf00e80000, memory type 2, type id 0
I0201 09:40:13.047937 14297 ensemble_scheduler.cc:524] Internal response release: size 34336, addr 0xf00e80000
I0201 09:40:13.048183 14297 infer_request.cc:502] prepared: [0x0x7f34038470] request id: 1, model: greedy, requested version: -1, actual version: 1, flags: 0x0, correlation id: 0, batch size: 0, priority: 0, timeout (us): 0
original inputs:
[0x0x7f3400eca8] input: LOGITS, type: FP32, original shape: [1,232,37], batch + shape: [1,232,37], shape: [1,232,37]
override inputs:
inputs:
[0x0x7f3400eca8] input: LOGITS, type: FP32, original shape: [1,232,37], batch + shape: [1,232,37], shape: [1,232,37]
original requested outputs:
OUTPUT0
requested outputs:
OUTPUT0

I0201 09:40:13.048582 14297 pinned_memory_manager.cc:158] pinned memory deallocation: addr 0x101070090
Segmentation fault (core dumped)

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 10 (3 by maintainers)

Top GitHub Comments

1 reaction
ivangtorre commented, Feb 3, 2021

@Tabrizian I will work on a minimal example and upload it in the following days. Thanks for the support.

0 reactions
Tabrizian commented, Mar 16, 2021

Closing. Reopen if you still see the issue.

Read more comments on GitHub >

