Cache per-model dependencies in model archiver
🚀 The feature
Correct me if I'm wrong: currently TorchServe supports per-model dependencies, where the user can specify `--requirements-file` when running `torch-model-archiver`. When the model gets loaded, TorchServe creates some sort of Python venv to install these extra packages.
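For reference, a minimal packaging sketch showing where `--requirements-file` fits (the model name, files, and handler below are hypothetical), together with the `config.properties` switch that enables per-model installs:

```bash
# Package the model with its own requirements file (hypothetical file names).
torch-model-archiver \
  --model-name my_model \
  --version 1.0 \
  --serialized-file my_model.pt \
  --handler my_handler.py \
  --requirements-file requirements.txt

# TorchServe only installs per-model requirements at load time when this is
# set in config.properties:
#   install_py_dep_per_model=true
```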
To reduce the wait time, is it possible to cache the dependency packages somehow so that when the model gets loaded, it can be ready for serving (almost) immediately without waiting for several minutes to install dependencies?
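One possible reading of "caching" here, sketched as a workaround rather than a built-in TorchServe feature: pre-download the wheels into the serving image at build time, then point pip at that local cache so the per-model install at load time resolves locally instead of reaching out to PyPI (assuming the worker's pip invocation inherits the environment). Paths below are hypothetical.

```bash
# At image build time: pre-populate a local wheel cache (hypothetical path).
pip download -r requirements.txt -d /opt/wheel-cache

# At serve time: pip reads these environment variables, so the per-model
# install can resolve from the local cache instead of PyPI.
export PIP_NO_INDEX=1
export PIP_FIND_LINKS=/opt/wheel-cache
```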
Motivation, pitch
see above
Alternatives
No response
Additional context
No response
Issue Analytics
- Created: a year ago
- Comments: 6 (2 by maintainers)
Top GitHub Comments
Thanks, this is really helpful! Will try!
Actually, one question related to this: suppose a model specifies `my_package` in `requirements.in`, but it is already installed in the base image (i.e. the requirement is already satisfied). Will `my_package` be re-installed for this model, or will the install be skipped and the package from the base image used?