Recognition with KNN is not as good as expected
Hi InsightFace team, thank you for sharing the code. I have tried to build a recognition demo using KNN, but the results are not as good as what I got with Dlib's face_recognition. The distance between probe faces and those in the model trained with ArcFace is always higher than with the face_recognition library (meaning lower confidence). Could you please help me review whether I did something wrong? Here is the code I use to train:
```
import argparse
import os
import time
from datetime import date

import cv2
from sklearn import neighbors

import face_model  # InsightFace deploy helper

# (Excerpted from a training function; train_dir, n_neighbors, knn_algo and
# image_files_in_folder are defined by the caller.)
X = []
y = []
parser = argparse.ArgumentParser(description='face model test')
parser.add_argument('--image-size', default='112,112', help='')
parser.add_argument('--model', default='./deploy/model-r100-ii/model,0', help='path to load model.')
parser.add_argument('--ga-model', default='./deploy/gamodel-r50/model,0', help='path to load model.')
parser.add_argument('--gpu', default=0, type=int, help='gpu id')
parser.add_argument('--det', default=0, type=int,
                    help='mtcnn option, 1 means using R+O, 0 means detect from beginning')
parser.add_argument('--flip', default=0, type=int, help='whether do lr flip aug')
parser.add_argument('--threshold', default=1.24, type=float, help='ver dist threshold')
args = parser.parse_args()
model = face_model.FaceModel(args)
today = date.today().strftime("%d%m%Y")

# Loop through each person in the training set
i = 0
for class_dir in os.listdir(train_dir):
    i += 1
    print(str(i) + '------------------' + class_dir)
    if not os.path.isdir(os.path.join(train_dir, class_dir)):
        continue
    # Loop through each training image for the current person
    for img_path in image_files_in_folder(os.path.join(train_dir, class_dir)):
        img = cv2.imread(img_path)
        img = model.get_input(img)
        print('Running inference... %s' % img_path)
        if img is None or len(img) == 0:
            print('Cannot calculate ', img_path)
            continue
        start = time.time()
        _face_description = model.get_feature(img)
        print("Face extract took {} seconds.".format(time.time() - start))
        X.append(_face_description)
        y.append(class_dir)

clf = neighbors.KNeighborsClassifier(n_neighbors=n_neighbors, algorithm=knn_algo, weights='distance')
clf.fit(X, y)
return clf
```
and the code to recognize is the following:
```
video_capture = cv2.VideoCapture(video_path + filename)
ret, frame = video_capture.read()
img = self.face_describer.get_input(frame)
_face_description = self.face_describer.get_feature(img)
_face_description = np.expand_dims(_face_description, axis=0)

# Use the KNN model to find the best matches for the test face.
# kneighbors returns (distances, indices); closest_distances[0] is the
# distance array of shape (n_queries, n_neighbors).
closest_distances = clf.kneighbors(_face_description, n_neighbors=5)
are_matches = [closest_distances[0][i][0] <= distance_threshold
               for i in range(len(closest_distances[0]))]

# Predict classes and drop predictions that are not within the threshold
return [(closest_distances[0][0][0], pred, loc) if rec else (closest_distances[0][0][0], "unknown", loc)
        for pred, loc, rec in zip(clf.predict(_face_description), X_face_locations, are_matches)]
```
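As a side note on the `clf.kneighbors` call: it returns a `(distances, indices)` pair, each of shape `(n_queries, n_neighbors)`, which is why the double indexing above is needed. A minimal sketch with toy 2-D points standing in for embeddings (hypothetical data, not real face features):

```python
import numpy as np
from sklearn import neighbors

# Toy training set: two people, two 2-D "embeddings" each (hypothetical data)
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = ["alice", "alice", "bob", "bob"]

clf = neighbors.KNeighborsClassifier(n_neighbors=2, weights="distance")
clf.fit(X, y)

query = np.array([[0.05, 0.0]])                   # one test "face"
distances, indices = clf.kneighbors(query, n_neighbors=2)
print(distances.shape)                            # (1, 2): one query, two neighbours

distance_threshold = 0.5
is_match = distances[0][0] <= distance_threshold  # is the nearest neighbour close enough?
label = clf.predict(query)[0] if is_match else "unknown"
print(label)                                      # "alice" for this toy query
```

With this layout, `distances[0][i]` is the distance to the i-th nearest neighbour of the first (and here only) query.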
Thanks.
Issue Analytics
- State:
- Created 4 years ago
- Reactions:3
- Comments:7
If you use L2 distance, you should normalize the embeddings first, and the distance threshold is usually high (about 1.0). If you use cosine distance, the threshold is smaller (about 0.5).
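The two thresholds are consistent with each other: for unit-length vectors, squared L2 distance and cosine similarity are related by the identity ||a − b||² = 2(1 − cos(a, b)). A short sketch of this (plain vector math, not InsightFace-specific; the example vectors are made up):

```python
import numpy as np

def l2_normalize(v):
    # Scale a vector to unit length so L2 and cosine thresholds are comparable
    return v / np.linalg.norm(v)

a = l2_normalize(np.array([0.3, 1.2, -0.5]))
b = l2_normalize(np.array([0.1, 1.0, -0.7]))

cos_sim = float(np.dot(a, b))           # cosine similarity of unit vectors
l2_dist = float(np.linalg.norm(a - b))  # Euclidean distance of unit vectors

# For unit vectors: ||a - b||^2 == 2 * (1 - cos_sim)
assert abs(l2_dist**2 - 2 * (1 - cos_sim)) < 1e-9
```

So an L2 threshold of 1.0 on normalized embeddings corresponds to a cosine-distance threshold of 1.0² / 2 = 0.5, which matches the figures above.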
On Fri, Jul 19, 2019 at 08:24, khanhnt notifications@github.com wrote:
@khanhnt did you solve this using cosine as @tranvanhoa533 suggested? If so, can you share your feedback on it? @tranvanhoa533 are you referring to cosine distance?
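For reference, scikit-learn's `KNeighborsClassifier` accepts `metric='cosine'` directly (with the brute-force algorithm), so the suggestion above can be tried without hand-rolling the distance. A minimal sketch with made-up unit-norm vectors standing in for ArcFace embeddings:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy unit-norm "embeddings" (hypothetical stand-ins for ArcFace features)
X = np.array([[1.0, 0.0], [0.98, 0.2], [0.0, 1.0], [0.2, 0.98]])
X = X / np.linalg.norm(X, axis=1, keepdims=True)
y = ["alice", "alice", "bob", "bob"]

# metric='cosine' makes kneighbors() return cosine *distances* (1 - similarity)
clf = KNeighborsClassifier(n_neighbors=2, metric='cosine', algorithm='brute')
clf.fit(X, y)

query = np.array([[0.99, 0.1]])
query = query / np.linalg.norm(query, axis=1, keepdims=True)
distances, _ = clf.kneighbors(query)

print(clf.predict(query)[0])   # "alice"
print(distances[0][0] < 0.5)   # True: nearest cosine distance is well under 0.5
```

With this setup the ~0.5 cosine-distance threshold mentioned above can be applied to `distances[0][0]` the same way as the L2 threshold in the recognition snippet.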