Load Multiple Models and Execute Them Simultaneously
Issue Description
I have two questions:
- Is there any way to load multiple models and transform an MLeap frame against any of them? Right now, I can only load one model at a time and transform frames against that one model.
The scenario would be:
Load two models:
curl -XPUT -H "content-type: application/json" -d '{"path":"/models/airbnb.model.lr-0.6.0-SNAPSHOT.zip"}' http://172.17.0.1:65327/model
curl -XPUT -H "content-type: application/json" -d '{"path":"/models/airbnb.model.rf-0.6.0-SNAPSHOT.zip"}' http://172.17.0.1:65327/model
Now try to transform a leap frame against either model:
curl -v -XPOST -H "accept: application/json" \
-H "content-type: application/json" \
-d @/models/frame.airbnb.json \
http://172.17.0.1:65327/transform
But I couldn't find any parameter for selecting a model when transforming an MLeap frame. Is there any provision for this?
- Can I transform a whole batch of MLeap frames (e.g., 100) in a single request?
Looking forward to your reply.
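On the batching question: a leap frame is JSON with a `schema` and a `rows` array, so one client-side workaround is to concatenate the rows of many single-row frames (sharing the same schema) into one frame and send that in a single `/transform` request. The sketch below assumes this multi-row behavior; the `merge_frames` helper and the field names are illustrative, not part of MLeap.

```python
import json

def merge_frames(frames):
    """Merge leap frames that share a schema into one batched frame.

    A leap frame here is a dict like {"schema": {...}, "rows": [[...], ...]}.
    This helper (hypothetical, client-side only) keeps the first frame's
    schema and concatenates all rows.
    """
    if not frames:
        raise ValueError("need at least one frame")
    schema = frames[0]["schema"]
    merged_rows = []
    for frame in frames:
        if frame["schema"] != schema:
            raise ValueError("all frames must share the same schema")
        merged_rows.extend(frame["rows"])
    return {"schema": schema, "rows": merged_rows}

# Example: two single-row frames with the same (illustrative) schema.
schema = {"fields": [{"name": "bedrooms", "type": "double"},
                     {"name": "bathrooms", "type": "double"}]}
f1 = {"schema": schema, "rows": [[2.0, 1.0]]}
f2 = {"schema": schema, "rows": [[3.0, 2.0]]}
batched = merge_frames([f1, f2])
payload = json.dumps(batched)  # body for a single POST to /transform
```

Whether the server accepts multi-row frames depends on the serving version, so this is worth verifying against your deployment first.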
Issue Analytics
- Created 6 years ago
- Comments: 6 (3 by maintainers)
@Bond-OO7 Sorry, I missed this comment. We are about to release 0.8.0 and for 0.9.0 we want to make major improvements to the model servers, including:
We don’t have plans to add parallel execution to the core MLeap engine itself, but loading multiple models into the server should let your application do this.
We released the new model serving services with MLeap 0.13 earlier this year; please let us know if you have any questions about them.
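For anyone still on the older single-model server, a common workaround is to run one serving instance per model on different ports and route requests client-side. A minimal sketch of that routing; the port numbers, model names, and `MODEL_ENDPOINTS` registry are assumptions for illustration, not MLeap API:

```python
# Client-side routing across separate single-model serving instances.
# Each instance was started on its own port and had one model PUT to it.
MODEL_ENDPOINTS = {
    "airbnb-lr": "http://172.17.0.1:65327/transform",
    "airbnb-rf": "http://172.17.0.1:65328/transform",
}

def endpoint_for(model_name):
    """Return the /transform URL of the instance serving `model_name`."""
    try:
        return MODEL_ENDPOINTS[model_name]
    except KeyError:
        raise ValueError(f"no serving instance registered for {model_name!r}")

# A real client would POST the leap-frame JSON to endpoint_for("airbnb-lr").
```

The newer multi-model serving in MLeap 0.13 makes this routing unnecessary on the client, but the pattern still works against older deployments.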