Support for Batch Inference
Hi and Thanks for the great package.
I wanted to ask if there is any support for inferring landmarks for a batch of images. Each call to
fa.get_landmarks(image) takes a really long time compared to something like Dlib's landmark detector, even on a GPU, and I was hoping I could process a batch of images at once to remedy this issue a bit.
- Created 4 years ago
- Comments: 5 (2 by maintainers)
Top GitHub Comments
Sure, we can add the batch size as an optional parameter, with the default set to 1
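A batch-size parameter like the one proposed above could be sketched as a thin wrapper that chunks the input list and runs the model on each stacked chunk. This is only an illustration, not the library's actual API: `get_landmarks_batched`, `infer_batch`, and the mock detector below are hypothetical names, and the real per-batch call would be whatever the face-alignment model exposes.

```python
import numpy as np

def get_landmarks_batched(images, infer_batch, batch_size=1):
    """Run landmark inference over `images` in chunks of `batch_size`.

    `infer_batch` is a callable that takes a stacked (B, H, W, 3) array
    and returns one landmark array per image in the batch; here it
    stands in for the library's actual model call.
    """
    results = []
    for start in range(0, len(images), batch_size):
        # Stack the chunk into a single (B, H, W, 3) array so the model
        # sees all images in one forward pass instead of one at a time.
        chunk = np.stack(images[start:start + batch_size])
        results.extend(infer_batch(chunk))
    return results

# Demo with a mock detector returning one (68, 2) landmark set per image.
mock_infer = lambda batch: [np.zeros((68, 2)) for _ in batch]
imgs = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(5)]
landmarks = get_landmarks_batched(imgs, mock_infer, batch_size=2)
print(len(landmarks))  # one landmark set per input image
```

With `batch_size=1` the wrapper behaves like the current per-image loop, so the default keeps existing behaviour while larger values amortise GPU overhead across images.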