Reference Points Used for ArcFace Alignment
Hi,
Why do you use the following points for image alignment when preprocessing images for training with the ArcFace loss?
arcface_src = np.array(
    [[38.2946, 51.6963],
     [73.5318, 51.5014],
     [56.0252, 71.7366],
     [41.5493, 92.3655],
     [70.7299, 92.2041]], dtype=np.float32)
It seems that, first of all, the eye points and the mouth points do not have the same y-coordinates. Furthermore, when mirroring along the y-axis (a common data-augmentation operation), the positions of the landmarks change slightly, which could have a negative influence on training.
So where do these coordinates come from, and why are they not designed so that the mirroring transformation leaves the landmark locations unchanged?
Kind regards,
Christian
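The asymmetry described above can be checked numerically. The following sketch (an illustration, not from the original repo) assumes a 112×112 crop with the template rows ordered left eye, right eye, nose tip, left mouth corner, right mouth corner, and uses the pixel-index flip convention x → (W − 1) − x:

```python
import numpy as np

# ArcFace 5-point template for a 112x112 crop
# (left eye, right eye, nose tip, left mouth corner, right mouth corner)
arcface_src = np.array(
    [[38.2946, 51.6963],
     [73.5318, 51.5014],
     [56.0252, 71.7366],
     [41.5493, 92.3655],
     [70.7299, 92.2041]], dtype=np.float32)

# Horizontal flip of a 112-pixel-wide image maps x -> 111 - x
# (assuming the pixel-index convention x -> (W - 1) - x).
mirrored = arcface_src.copy()
mirrored[:, 0] = 111.0 - mirrored[:, 0]
# After the flip, left and right points swap roles.
mirrored = mirrored[[1, 0, 2, 4, 3]]

# Largest residual between the template and its mirrored counterpart:
residual = np.abs(mirrored - arcface_src).max()
print(residual)  # roughly 1.28 pixels, so the template is not mirror-symmetric
```

The largest deviation shows up at the mouth corners, which quantifies the slight landmark shift under left-right mirroring that the question refers to.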
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 3
- Comments: 6
Top Results From Across the Web
Searching for Alignment in Face Recognition - Jianzhu Guo
For comparison, we map the widely-used 5-point template presented in ArcFace (Deng et al. 2019) to the predefined 300 × 300...
Read more >

Questions about alignment coordinate points · Issue #53 - GitHub
In face_align_demo.m, there are 5 coordinate points used for alignment, so the question is how you get their values.
Read more >

How to pass the 5 landmarks of retinaface and perform face ...
So does the face alignment custom model need 2 input layers? Or what is the use of the landmarks; are they independent of the model...
Read more >

The Elements of End-to-end Deep Face Recognition - arXiv
The indicated choices of alignment policy differ in the number of facial landmarks used, the cropping size of the face image, and the vertical shift. Among...
Read more >
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
When the output image is supposed to have size 224×224, the points used for alignment should be

2 * np.array(
    [[38.2946, 51.6963],
     [73.5318, 51.5014],
     [56.0252, 71.7366],
     [41.5493, 92.3655],
     [70.7299, 92.2041]], dtype=np.float32)

It would be interesting to see how the choice of these alignment landmarks influences recognition performance, but I guess you would need a lot of computational resources to evaluate that.

It's coming from another paper (git repo). The slight y-axis changes during left-right mirroring can provide some image augmentation.
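To make the alignment step concrete: ArcFace-style pipelines typically estimate a least-squares similarity transform from the detected 5 landmarks to this template and warp the image with it. Below is a minimal numpy sketch of that fit (the Umeyama method, which is also what skimage.transform.SimilarityTransform.estimate computes); the landmarks array is a hypothetical detection used only for illustration:

```python
import numpy as np

# ArcFace 5-point template for a 112x112 crop
# (left eye, right eye, nose tip, left mouth corner, right mouth corner)
arcface_src = np.array(
    [[38.2946, 51.6963],
     [73.5318, 51.5014],
     [56.0252, 71.7366],
     [41.5493, 92.3655],
     [70.7299, 92.2041]], dtype=np.float32)

def estimate_similarity(src, dst):
    """Least-squares similarity transform (Umeyama) mapping src onto dst.

    Returns a 2x3 matrix M so that dst ~= M @ [x, y, 1]^T, i.e. the
    shape cv2.warpAffine expects."""
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / n
    U, S, Vt = np.linalg.svd(cov)
    # Correction factor that rules out reflections
    d = 1.0 if np.linalg.det(U) * np.linalg.det(Vt) >= 0 else -1.0
    D = np.diag([1.0, d])
    R = U @ D @ Vt                                    # pure rotation
    scale = np.trace(np.diag(S) @ D) * n / (src_c ** 2).sum()
    t = mu_d - scale * (R @ mu_s)
    return np.hstack([scale * R, t[:, None]])

# For a 224x224 output, the destination template is simply scaled by 2,
# as noted in the comment above.
landmarks = np.array([[39.0, 52.0], [74.0, 51.0], [57.0, 72.0],
                      [42.0, 93.0], [71.0, 92.0]])  # hypothetical detections
M = estimate_similarity(landmarks, 2.0 * arcface_src)
# The aligned crop would then be obtained with, e.g.:
#   aligned = cv2.warpAffine(img, M, (224, 224))
```

Because the fit is a single global similarity (scale, rotation, translation), the small left/right asymmetry of the template translates into a slightly different warp for a mirrored face, which is exactly the effect discussed in the question.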