Questions on using DeepFace.find() but with a custom face-detection
For my use case (a live-cam analysis setting) I already have a face detection model, but I still need a good face recognition model, and I'm using DeepFace.find():
DeepFace.find(face_image, db_path="./train_cut_faces/", model_name=model_name, model=model, enforce_detection=False, silent=True)
where for model_name each of ["VGG-Face", "Facenet", "Facenet512", "OpenFace", "DeepFace", "DeepID", "ArcFace", "Dlib", "SFace"] is used in turn (I want to test them all for my use case; see the loop sketch after the questions), and I have some questions:
1.) When using enforce_detection = False, should the images in db_path contain only the face? (I guess this must be true.)
2.) The creation of my face database is done in a different script (let's call it training.py). Do I need to (pre-)process the faces in some way, for example with cv2.resize(face_image, (224, 224)), or normalize/standardize them?
3.) If 2.) is not needed, should the stored faces have a minimum/maximum pixel size? The face detection algorithm I use can detect faces up to a few meters from the camera, and some detected faces are only about 20x40 pixels.
4.) When DeepFace.find() is called for the first time, it creates a representations_deepface.pkl file with the embeddings of the faces in db_path (as far as I understand) and reuses that .pkl on later calls. In my use case the user can add persons/faces and their names to the database at will; is it possible to somehow “update” the .pkl?
5.) What is, in your opinion, the best “middle-ground” model for a live-analysis setting in terms of accuracy and speed? In this setting at least a few FPS are desirable.
6.) Does the choice of the distance_metric function have a significant impact on which faces are recognized?
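For reference, here is roughly how I loop over the candidate models. Treat it as a sketch: face_image stands for an already-cropped face from my own detector, the file name is a placeholder, and the model= argument and the return type of find() vary between DeepFace versions.

```python
import cv2
from deepface import DeepFace

# hypothetical already-cropped face coming from my own detector
face_image = cv2.imread("cut_face.jpg")

model_names = ["VGG-Face", "Facenet", "Facenet512", "OpenFace",
               "DeepFace", "DeepID", "ArcFace", "Dlib", "SFace"]

for model_name in model_names:
    # pre-building the model avoids reloading it on every call; the model=
    # argument exists in the DeepFace version I use, newer releases cache
    # models internally instead
    model = DeepFace.build_model(model_name)
    df = DeepFace.find(
        face_image,
        db_path="./train_cut_faces/",
        model_name=model_name,
        model=model,
        enforce_detection=False,  # the crop is already a face
        silent=True,
    )
    print(model_name, len(df))
```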
Thank you for your time!
@serengil Regarding 4): I have looked through the code and found the part where the embeddings are calculated and the .pkl file is stored. I put it all into a function and it seems to do the job. The only thing I added was the removal of the .pkl file right before saving it again with the new embeddings:
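A minimal sketch of that helper, assuming the .pkl simply holds a list of [image_path, embedding] pairs and is named after the model (both details depend on the installed DeepFace version, so check deepface's own find() code before relying on this):

```python
import os
import pickle
from deepface import DeepFace

def rebuild_representations(db_path, model_name="VGG-Face"):
    # file name used by the DeepFace version I looked at,
    # e.g. representations_vgg_face.pkl
    pkl_name = "representations_%s.pkl" % model_name.lower().replace("-", "_")
    pkl_path = os.path.join(db_path, pkl_name)

    representations = []
    for root, _, files in os.walk(db_path):
        for file in files:
            if not file.lower().endswith((".jpg", ".jpeg", ".png")):
                continue
            img_path = os.path.join(root, file)
            # faces in db_path are already cropped, so skip re-detection;
            # depending on the DeepFace version, represent() returns the raw
            # embedding or a list of dicts with an "embedding" key
            embedding = DeepFace.represent(
                img_path,
                model_name=model_name,
                enforce_detection=False,
            )
            representations.append([img_path, embedding])

    # remove the stale .pkl right before writing the fresh one
    if os.path.exists(pkl_path):
        os.remove(pkl_path)
    with open(pkl_path, "wb") as f:
        pickle.dump(representations, f)

rebuild_representations("./train_cut_faces/", model_name="VGG-Face")
```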
Referring to your point 3: when I run this, at times a different person is identified. I am extracting face images using MTCNN() and then calculating their vectors (VGG-Face). Check the verification results of the first 5 images.
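For context, a minimal sketch of the pipeline described above, cropping a face with MTCNN and handing the crop to DeepFace.find() with detection disabled (the frame file name and db_path are placeholders):

```python
import cv2
from mtcnn import MTCNN
from deepface import DeepFace

detector = MTCNN()

# hypothetical input frame from the camera
frame = cv2.imread("frame.jpg")

# MTCNN expects RGB images
detections = detector.detect_faces(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

for det in detections:
    x, y, w, h = det["box"]
    face = frame[max(y, 0):y + h, max(x, 0):x + w]
    # the crop is already a face, so DeepFace's own detection is skipped
    matches = DeepFace.find(
        face,
        db_path="./train_cut_faces/",
        model_name="VGG-Face",
        enforce_detection=False,
        silent=True,
    )
    print(matches)
```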