
Supply a requirements file with the model-archive.

See original GitHub issue

versions


torch                1.5.1
torch-model-archiver 0.2.0b20200731
torchserve           0.1.1
torchtext            0.6.0
torchvision          0.6.0a0+35d732a

export command


torch-model-archiver --model-name craft --version 1.0 --model-file ./craft.py --serialized-file ./model/craft_mlt_25k.pth --handler ./craft_handler.py -r requirements.txt
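
As a quick sanity check (not part of the original report), the .mar produced by torch-model-archiver is a zip archive, so you can confirm the requirements file was actually packaged; adjust the path to wherever the archive was exported.

# Hedged check: list the archive contents and look for requirements.txt.
unzip -l craft.mar | grep requirements.txt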

requirements.txt


-i https://pypi.tuna.tsinghua.edu.cn/simple
pillow
opencv_python==3.4.1.15
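
If the archived requirements never seem to install, one way to rule out the pins and the mirror themselves (a sketch, not something from the issue) is to resolve the file locally before packaging it:

# Hedged pre-check: resolve the pinned packages against the configured index.
pip download -r requirements.txt -d /tmp/req-check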

docker command


docker run --rm -it \
           -p 8080:8080 -p 8081:8081 \
           --name torch_serve_cpu \
           -v $(pwd)/model_store:/home/model-server/model-store \
           pytorch/torchserve \
           torchserve --start --ncs --model-store model-store --models craft.mar
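
To confirm whether the bundled requirements ever get installed into this container, one option (a sketch reusing the container name from the command above) is to inspect its Python environment directly:

# Hedged check: look for opencv in the serving container's environment.
docker exec -it torch_serve_cpu pip list | grep -i opencv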

logs


2020-07-31 08:22:18,079 [INFO ] W-9003-craft_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle -     import cv2
2020-07-31 08:22:18,079 [INFO ] W-9003-craft_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ModuleNotFoundError: No module named 'cv2'
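
The ModuleNotFoundError above suggests the packages from requirements.txt were never installed into the worker's environment. One likely cause, which the issue itself does not confirm: TorchServe only installs a model's bundled requirements when install_py_dep_per_model is enabled in config.properties. A sketch of the same run with that setting mounted in (file names and paths are my own):

# Sketch, not from the issue: enable per-model dependency installation.
cat > config.properties <<'EOF'
install_py_dep_per_model=true
EOF

docker run --rm -it \
           -p 8080:8080 -p 8081:8081 \
           --name torch_serve_cpu \
           -v $(pwd)/model_store:/home/model-server/model-store \
           -v $(pwd)/config.properties:/home/model-server/config.properties \
           pytorch/torchserve \
           torchserve --start --ncs \
                      --ts-config /home/model-server/config.properties \
                      --model-store model-store --models craft.mar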

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 3
  • Comments: 15 (4 by maintainers)

Top GitHub Comments

1 reaction
IamMohitM commented, Jun 13, 2022

Actually, it worked at a later point with a different package (tensorflow) in the requirements file. I haven’t tested with opencv as shown above. I used the --requirements-file option instead of -r.

1 reaction
festinais commented, Jun 1, 2022

@IamMohitM, can you try the --requirements-file flag instead of -r? For more details: https://github.com/pytorch/serve/tree/master/model-archiver#torch-model-archiver-command-line-interface.

I had the same issue and tried with --extra-files and -r; it didn’t work. With --requirements-file it works!
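
Based on these comments (untested here), the archive command from the report would become:

torch-model-archiver --model-name craft --version 1.0 --model-file ./craft.py --serialized-file ./model/craft_mlt_25k.pth --handler ./craft_handler.py --requirements-file requirements.txt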


Top Results From Across the Web

  • 6. Custom Service — PyTorch/Serve master documentation
    The custom handler file must define a module level function that acts as an entry point for execution ... Supply a requirements file...
  • model-archiver/README.md · wqw547243068/serve - Gitee.com
    The following information is required to create a standalone model archive: Model name; Model file; Serialized file. Installation. Install torch-model-archiver ...
  • The Python Requirements File and How to Create it
    It is a simple text file that saves a list of the modules and packages required by your project. By creating a Python...
  • Use TensorFlow with the SageMaker Python SDK
    Include a requirements.txt file in the same directory as your training script. ... provide the accelerator type to accelerator_type to your deploy call....
  • Deploying PyTorch models for inference at scale using ...
    TorchServe uses a model archive format with the extension .mar. A .mar file packages model checkpoints or model definition file with ...
