
How to transfer files to a custom handler with curl command

See original GitHub issue

I have created a custom handler that takes wav files as input and produces wav files as output. The code is as follows:

# custom handler file

# model_handler.py

"""
ModelHandler defines a custom model handler.
"""
import os
import numpy as np
import soundfile
import torch
from espnet2.bin.enh_inference import *

from ts.torch_handler.base_handler import BaseHandler

class ModelHandler(BaseHandler):
    """
    A custom model handler implementation.
    """

    def __init__(self):
        self._context = None
        self.initialized = False
        self.model = None
        self.device = None

    def initialize(self, context):
        """
        Invoked by TorchServe to load the model
        :param context: context contains model server system properties
        :return:
        """

        #  load the model
        self.manifest = context.manifest

        properties = context.system_properties
        model_dir = properties.get("model_dir")
        self.device = torch.device("cuda:" + str(properties.get("gpu_id")) if torch.cuda.is_available() else "cpu")

        # Read model serialize/pt file
        serialized_file = self.manifest['model']['serializedFile']
        model_pt_path = os.path.join(model_dir, serialized_file)

        if not os.path.isfile(model_pt_path):
            raise RuntimeError("Missing the model.pt file")

        self.model = SeparateSpeech("./train_enh_transformer_tf.yaml", "./valid.loss.best.pth", normalize_output_wav=True)

        self.initialized = True

    def preprocess(self, data):
        audio_data, rate = soundfile.read(data)
        preprocessed_data = audio_data[np.newaxis, :]

        return preprocessed_data

    def inference(self, model_input):
        model_output = self.model(model_input)
        return model_output

    def postprocess(self, inference_output):
        """
        Return inference result.
        :param inference_output: list of inference output
        :return: list of predict results
        """
        # Take output from network and post-process to desired format
        postprocess_output = inference_output
        # convert to wav
        return postprocess_output

    def handle(self, data, context):
        model_input = self.preprocess(data)
        model_output = self.inference(model_input)
        return self.postprocess(model_output)

I sent the wav file to TorchServe with the following command:

curl --data-binary @Mix.wav --noproxy '*' http://127.0.0.1:8080/predictions/denoise_transformer -v

However, I got the following response:

*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> POST /predictions/denoise_transformer HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.58.0
> Accept: */*
> Content-Length: 128046
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 500 Internal Server Error
< content-type: application/json
< x-request-id: 445155a4-5971-490a-ba7c-206f8eda5ea0
< Pragma: no-cache
< Cache-Control: no-cache; no-store, must-revalidate, private
< Expires: Thu, 01 Jan 1970 00:00:00 UTC
< content-length: 89
< connection: close
<
{
  "code": 500,
  "type": "ErrorDataDecoderException",
  "message": "Bad end of line"
}
* Closing connection 0

What is wrong?

I have confirmed that the following command returns a response:

curl --noproxy '*' http://127.0.0.1:8081/models

{
  "models": [
    {
      "modelName": "denoise_transformer",
      "modelUrl": "denoise_transformer.mar"
    }
  ]
}

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
Shin-ichi-Takayama commented, Aug 30, 2022

Thank you for your response. I was able to get it to work by modifying the custom handler as follows:

# custom handler file

# model_handler.py

"""
ModelHandler defines a custom model handler.
"""
from typing import Dict, List, Tuple
import io
import os
import wave
import array
import numpy as np
import soundfile
import torch
from scipy.io.wavfile import write
from espnet2.bin.enh_inference import *

from ts.torch_handler.base_handler import BaseHandler

class ModelHandler(BaseHandler):
    """
    A custom model handler implementation.
    """

    def __init__(self):
        self._context = None
        self.initialized = False
        self.model = None
        self.device = None

    def initialize(self, context):
        """
        Invoked by TorchServe to load the model
        :param context: context contains model server system properties
        :return:
        """

        #  load the model
        self.manifest = context.manifest

        properties = context.system_properties
        model_dir = properties.get("model_dir")
        self.device = torch.device("cuda:" + str(properties.get("gpu_id")) if torch.cuda.is_available() else "cpu")

        # Read model serialize/pt file
        serialized_file = self.manifest['model']['serializedFile']
        model_pt_path = os.path.join(model_dir, serialized_file)

        if not os.path.isfile(model_pt_path):
            raise RuntimeError("Missing the model.pt file")

        self.model = SeparateSpeech("./train_enh_transformer_tf.yaml", "./valid.loss.best.pth", normalize_output_wav=True)

        self.initialized = True

    def preprocess(self, data):
        # TorchServe hands the handler a batch: a list of dicts whose raw
        # request payload is available under the 'body' (or 'data') key
        wav_data = data[0]['body']
        audio_data, rate = soundfile.read(io.BytesIO(wav_data))
        preprocessed_data = audio_data[np.newaxis, :]

        return preprocessed_data

    def inference(self, model_input):
        model_output = self.model(model_input)
        return model_output

    def postprocess(self, inference_output):
        """
        Return inference result.
        :param inference_output: list of inference output
        :return: list of predict results
        """
        # Take output from network and post-process to desired format
        postprocess_output = inference_output
        return postprocess_output

    def handle(self, data, context):
        temp_file_name = '/tmp/temp.wav'

        model_input = self.preprocess(data)
        model_output = self.inference(model_input)
        output = self.postprocess(model_output)
        # scale the enhanced float waveform to 16-bit PCM and write a 16 kHz wav file
        scaled = np.int16(output[0][0] * 32767)
        write(temp_file_name, 16000, scaled)
        # read the wav back and return its bytes as the response body
        with open(temp_file_name, 'rb') as f:
            out = f.read()
        #os.remove(temp_file_name)

        return [out]
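
For reference, the ErrorDataDecoderException in the original attempt appears to come from TorchServe's form-data decoder rather than from the handler: without an explicit header, curl sends --data-binary payloads as Content-Type: application/x-www-form-urlencoded (visible in the verbose output above), so the server tries to parse the wav bytes as form data. Sending an explicit binary content type avoids that decoder, and the body then reaches preprocess as raw bytes. A minimal invocation that should work with the modified handler, assuming the model is still registered as denoise_transformer and the enhanced audio is saved to enhanced.wav, looks like:

curl -H "Content-Type: audio/wav" --data-binary @Mix.wav --noproxy '*' http://127.0.0.1:8080/predictions/denoise_transformer --output enhanced.wav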


1 reaction
msaroufim commented, Aug 30, 2022

Did anything go wrong when you converted the data to 16-bit in your handler?
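
If writing a temporary file under /tmp is not desirable, the same 16-bit conversion can be done against an in-memory buffer. The sketch below only illustrates that idea using the libraries already imported in the accepted answer; the to_wav_bytes helper and the 16000 Hz sample rate are assumptions carried over from that answer, not something stated in the issue.

# sketch: convert the enhanced float waveform to 16-bit PCM wav bytes in memory
import io
import numpy as np
from scipy.io.wavfile import write

def to_wav_bytes(enhanced, rate=16000):
    # clip to [-1, 1] and scale to 16-bit integers, as in the answer above
    scaled = np.int16(np.clip(enhanced, -1.0, 1.0) * 32767)
    buf = io.BytesIO()
    write(buf, rate, scaled)  # scipy's write also accepts a file-like object
    return buf.getvalue()

# in handle(), the final line would then become: return [to_wav_bytes(output[0][0])]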

