FaceLandmarks68: left and right eyes/brows are reversed.
Hello,

FaceLandmarks68.getLeftEye() returns points corresponding to the right eye, and FaceLandmarks68.getRightEye() returns points corresponding to the left eye. The same issue applies to FaceLandmarks68.getLeftEyeBrow() and FaceLandmarks68.getRightEyeBrow().
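For context, the standard 68-point annotation used by dlib and face-api.js orders the eye landmarks by image position: indices 36–41 cover the eye on the image's left side (the subject's anatomical right eye), and indices 42–47 cover the eye on the image's right side (the subject's anatomical left eye). A minimal workaround sketch, assuming the getters are named from the viewer's side as reported here, is to wrap them in helpers named from the subject's point of view; the helper names below are hypothetical, not part of face-api.js:

```ts
import * as faceapi from 'face-api.js';

// Hypothetical wrappers that name the eyes from the subject's point of view,
// assuming getLeftEye()/getRightEye() return the image-left (indices 36-41)
// and image-right (indices 42-47) point groups, as reported in this issue.
function getSubjectRightEye(landmarks: faceapi.FaceLandmarks68): faceapi.Point[] {
  // Image-left points correspond to the subject's right eye in a non-mirrored frame.
  return landmarks.getLeftEye();
}

function getSubjectLeftEye(landmarks: faceapi.FaceLandmarks68): faceapi.Point[] {
  // Image-right points correspond to the subject's left eye in a non-mirrored frame.
  return landmarks.getRightEye();
}
```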
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
By default, source images and source video streams come non-mirrored, so that should be the baseline. If I need to flip a photo/video, I use CSS on the whole canvas (e.g. transform: scaleX(-1);) instead of processing the source frame, which is much cheaper, especially for a web camera video stream. But here is a catch: if I mirror the video in the browser using CSS, then I cannot use drawDetection() anymore, because the score text under the bounding box becomes mirrored too and is not readable. It is a non-issue for me as I draw a custom face boundary anyway, so just letting you know 😃 But if you decide to make drawDetection() support drawing over a mirrored representation and flip text labels accordingly, then you need a parameter here.

My case is a face liveness check based on certain statistics of relative movements of landmark points in a video stream. Before making the check, I validate the face geometry to ensure it is not distorted due to fast movement, glasses, etc. (which sometimes happens), so these garbled "Picasso-like" frames do not affect the statistics. As part of this sanity check I verify that the right eye is correctly positioned relative to the left eye. The assumption is that in a normal (non-mirrored) photo/video frame the subject's right eye will be closer to the left border of the frame than the left eye. But FaceLandmarks68 does not support this assumption: my code was not working, and I had to debug to find out that the method names are misleading.
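A minimal sketch of such a geometry sanity check, assuming a non-mirrored frame and the reversed naming described above (the helper names are illustrative, not part of face-api.js):

```ts
import * as faceapi from 'face-api.js';

// Mean x coordinate of a group of landmark points.
function meanX(points: faceapi.Point[]): number {
  return points.reduce((sum, p) => sum + p.x, 0) / points.length;
}

// In a non-mirrored frame, the subject's right eye should sit closer to the
// left border of the frame than the subject's left eye.
// The getters are swapped here on the assumption that getLeftEye()/getRightEye()
// are named from the viewer's side, as reported in this issue.
function eyesPlausiblyOrdered(landmarks: faceapi.FaceLandmarks68): boolean {
  const subjectRightEyeX = meanX(landmarks.getLeftEye());  // image-left point group
  const subjectLeftEyeX = meanX(landmarks.getRightEye());  // image-right point group
  return subjectRightEyeX < subjectLeftEyeX;
}
```

Frames that fail a check like this can simply be excluded from the movement statistics, as described above.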