
Support for Batch Inference

See original GitHub issue

Hi, and thanks for the great package.

I wanted to ask whether there is any support for inferring landmarks for a batch of images. Each call to fa.get_landmarks(image) takes a really long time compared to something like DLib’s landmark detector, even on a GPU, and I was hoping I could process a batch of images at once to remedy this a bit.

Thanks again.
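To make the performance concern concrete, here is a minimal, generic sketch of why batching tends to help on a GPU: per-image calls pay Python and kernel-launch overhead for every image, while a single forward pass over a stacked tensor amortizes that overhead. The toy model below is only a stand-in for a landmark network; it is not the face_alignment API, and all names are placeholders.

```python
# Generic illustration of per-image vs. batched inference (not the face_alignment API).
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Tiny stand-in for a face-alignment backbone; the real network is much larger.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 68 * 2),          # 68 (x, y) landmarks, as in the usual 68-point scheme
).to(device).eval()

images = [torch.rand(3, 256, 256) for _ in range(32)]  # pretend face crops

with torch.no_grad():
    # Per-image inference: one forward pass (and its overhead) per image.
    per_image = [model(img.unsqueeze(0).to(device)) for img in images]

    # Batched inference: stack once, run a single forward pass.
    batch = torch.stack(images).to(device)      # shape (32, 3, 256, 256)
    batched = model(batch)                      # shape (32, 136)
```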

Issue Analytics

  • State: open
  • Created 5 years ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
1adrianb commented, Nov 30, 2018

Sure, we can add the batch size as an optional parameter, with the default set to 1.
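From the caller’s side, the proposed parameter might look like the sketch below. This is hypothetical: a batch_size keyword did not exist at the time of this comment, and the constructor arguments simply follow the project’s README of that era, so exact enum and keyword names may differ between releases.

```python
import face_alignment

# Constructor roughly as in the project's README of that era; enum/keyword
# names vary between releases.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device="cuda")

# Hypothetical call: a batch_size keyword did not exist when this was proposed.
# preds = fa.get_landmarks(image, batch_size=8)
```

For what it’s worth, later releases of the package reportedly added a get_landmarks_from_batch method that accepts a stacked batch of images, which addresses the same need.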

0 reactions
Neltherion commented, May 10, 2019

@AmirSh15 Not really. This is a real problem for lots of Face Alignment + Face Detection libraries out there: They don’t support batch inference properly!

Read more comments on GitHub >

Top Results From Across the Web

New — Introducing Support for Real-Time and Batch ...
I'm pleased to share that you can now deploy data preparation flows from SageMaker Data Wrangler for real-time and batch inference. This feature ......
Batch Inference vs Online Inference
Batch inference, or offline inference, is the process of generating predictions on a batch of observations. The batch jobs are typically ...
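As a concrete illustration of the “batch of observations” idea, the sketch below scores a stored feature matrix in fixed-size chunks, the way a scheduled offline job would; the model and all names are placeholders.

```python
# Illustrative offline (batch) inference: score stored data in chunks on a schedule.
import numpy as np

def predict(chunk):
    return chunk.mean(axis=1)               # stand-in for a real model

observations = np.random.rand(10_000, 16)   # pretend feature matrix
chunk_size = 1_000

predictions = np.concatenate([
    predict(observations[i:i + chunk_size])
    for i in range(0, len(observations), chunk_size)
])
```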
Machine learning inference during deployment
Implement batch inference: Azure supports multiple features for batch inference. One feature is ParallelRunStep in Azure Machine Learning, ...
3. Batch Inference with TorchServe
TorchServe was designed to natively support batching of incoming inference requests. This functionality enables you to use your host resources optimally, ...
Improve Inference Efficiency with Batch Inference
Result queue: After inference is done, the inference worker sends the result to the result queue; the front-end service listens to the queue and...
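Here is a minimal sketch of that queue-based pattern, assuming a simple threaded server: the front end enqueues requests, a worker drains them into batches, runs one batched inference, and hands each result back on the caller’s own result queue. The model and all names are placeholders, and a real server would also enforce a maximum batch delay and handle errors.

```python
# Sketch of queue-based request batching with a single inference worker.
import queue
import threading

request_q = queue.Queue()
MAX_BATCH = 8

def fake_model(batch):
    # Stand-in for a real batched forward pass.
    return [f"landmarks for {item}" for item in batch]

def inference_worker():
    while True:
        # Block for the first request, then greedily drain up to MAX_BATCH.
        first = request_q.get()
        items = [first]
        while len(items) < MAX_BATCH:
            try:
                items.append(request_q.get_nowait())
            except queue.Empty:
                break
        results = fake_model([payload for payload, _ in items])
        # Hand each result back on the caller's own result queue.
        for (_, result_q), result in zip(items, results):
            result_q.put(result)

threading.Thread(target=inference_worker, daemon=True).start()

def infer(payload):
    # Front-end side: enqueue the request and wait for its result.
    result_q = queue.Queue(maxsize=1)
    request_q.put((payload, result_q))
    return result_q.get()

print(infer("image_0"))
```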
