
Server start stuck when loading python model instantiating certain transformers model

See original GitHub issue

Description When starting Triton Server with a Python model that instantiates certain Hugging Face transformers models during initialize(), the server gets stuck. Namely, I tried it with this checkpoint. Since it works if I use this checkpoint instead, I guess the problem could be the model size? I also experienced a similar problem when using the aforementioned working checkpoint but also loading other models (ONNX). Could this be related to the memory/GPU settings of the Docker container?

Triton Information nvcr.io/nvidia/tritonserver:22.04-py3

To Reproduce env:

conda create --name triton-transformers python=3.8.10
conda activate triton-transformers
conda install -c huggingface transformers==4.14.1 tokenizers==0.10.3
export PYTHONNOUSERSITE=True
conda-pack

model:

...
from transformers import AutoModelForTokenClassification

class TritonPythonModel:
    def initialize(self, args):
        self.checkpoint = r"/checkpoints/tner-xlm-roberta-large-multiconer-multi/"
        self.ner_model = AutoModelForTokenClassification.from_pretrained(self.checkpoint)
        print('Initialized...')

    def execute(self, requests):
        ...
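
The execute() body is elided above. As a rough sketch only (not the author's code), request handling in the Triton Python backend typically looks something like the following; run_ner is a hypothetical helper standing in for the tokenization and decoding logic, and the tensor names match the config below:

import json
import numpy as np
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    # initialize() as shown above

    def execute(self, requests):
        responses = []
        for request in requests:
            # "sequence" is TYPE_STRING (BYTES); decode to Python strings
            in_tensor = pb_utils.get_input_tensor_by_name(request, "sequence")
            sequences = [s.decode("utf-8") for s in in_tensor.as_numpy().flatten()]

            # run_ner is a hypothetical helper wrapping the tokenizer and self.ner_model
            entities = [json.dumps(self.run_ner(seq)) for seq in sequences]

            out_tensor = pb_utils.Tensor(
                "entities", np.array(entities, dtype=np.object_).reshape(1, -1))
            responses.append(pb_utils.InferenceResponse(output_tensors=[out_tensor]))

        return responses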

config:

backend: "python"
max_batch_size: 8
input [
  {
    name: "sequence"
    data_type: TYPE_STRING
    dims: [ -1, -1 ]
  }
]
output [
  {
    name: "entities"
    data_type: TYPE_STRING
    dims: [ -1, -1 ]
  }
]
parameters: [
  {
    key: "EXECUTION_ENV_PATH"
    value: { string_value: "/envs/triton-transformers.tar.gz"}
  }
]
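
For reference, a client call matching the config above might look like this; it is a hedged sketch rather than part of the original report: the model name ("ner"), server URL, and example text are assumptions.

import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")  # assumed default HTTP port

# Shape is [batch] plus the config's dims [ -1, -1 ]; TYPE_STRING maps to "BYTES"
text = np.array([[["Barack Obama visited Berlin"]]], dtype=np.object_)
inp = httpclient.InferInput("sequence", list(text.shape), "BYTES")
inp.set_data_from_numpy(text)

result = client.infer(model_name="ner", inputs=[inp])  # "ner" is an assumed model name
print(result.as_numpy("entities"))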

Expected behavior Triton Server should just start.

Issue Analytics

  • State: open
  • Created a year ago
  • Comments: 11 (6 by maintainers)

Top GitHub Comments

1 reaction
AJHoeh commented, Jun 27, 2022

Thanks at both of you!

1 reaction
dyastremsky commented, Jun 22, 2022

Thanks for the detailed post and clear reproduction instructions. It’s possible and a quick test of that would be increasing the GPUs/memory available to the container, if possible. To help find the cause more quickly, would you be able to run it with the --log-verbose=1 flag and provide the verbose logs?
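
One quick check along those lines (not suggested in the thread itself, just an assumption about how to isolate the problem) is to confirm that the checkpoint loads on its own inside the same container and packed environment, which separates a transformers or memory issue from a Triton one; a minimal sketch, reusing the checkpoint path from the model above:

import time
from transformers import AutoModelForTokenClassification

checkpoint = "/checkpoints/tner-xlm-roberta-large-multiconer-multi/"  # path from the model above

start = time.time()
model = AutoModelForTokenClassification.from_pretrained(checkpoint)
n_params = sum(p.numel() for p in model.parameters())
print(f"Loaded in {time.time() - start:.1f}s, {n_params / 1e6:.0f}M parameters")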


Top Results From Across the Web

  • Troubleshoot - Hugging Face
    Troubleshoot. Sometimes errors occur, but we are here to help! This guide covers some of the most common issues we've seen and how...
  • program stucks when running transformers example code
    I have some problem when trying to run the example code from ... from transformers import * # Load dataset, tokenizer, model from...
  • Ray Tune FAQ — Ray 2.2.0 - the Ray documentation
    Why is my training stuck and Ray reporting that pending actor or tasks ... If your model is small, you can usually try...
  • In Huggingface transformers, resuming training with the same ...
    ... class with resume_from_checkpoint=MODEL and resumed the training. ... Trainer( model=model, # the instantiated Transformers model to ...
  • Accelerating Inference in TensorFlow with TensorRT User Guide
    The following is a complete Python example starting from model definition ... import trt_convert as trt # Instantiate the TF-TRT converter converter =...
