Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

low test accuracy

See original GitHub issue

Hi, I used your ArcFace to train and test on the FashionMNIST dataset. Following your guideline, the test accuracy is less than 0.1. Your test code doesn't seem to work very well.

Can you provide a copy of model test code which can achieve good test accuracy? Thanks.

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 11 (5 by maintainers)

Top GitHub Comments

1 reaction
4uiiurz1 commented, Sep 18, 2019

@Liang-yc

Sorry! I misunderstood! Please try this code:

import numpy as np
from sklearn import metrics
from keras import backend as K
from keras.models import Model

# W: the ArcFace layer's class-weight matrix, shape (embedding_dim, n_classes)
W = arcface_model.get_layer('output').W
# Rebuild the model to emit the embedding (third layer from the end),
# fed by the image input only (the label input is dropped)
arcface_model = Model(inputs=arcface_model.input[0], outputs=arcface_model.layers[-3].output)
# X1i is a (images, one-hot labels) batch from the asker's data generator
arcface_features = arcface_model.predict(X1i[0], verbose=0)
# L2-normalize the embeddings so the matmul below gives similarity scores
arcface_features /= np.linalg.norm(arcface_features, axis=1, keepdims=True)
yTrue = np.argmax(X1i[1], axis=1)
# argmax over the per-class scores picks the predicted class
yPred = np.argmax(K.eval(arcface_features @ W), axis=1)
accuracy = metrics.accuracy_score(yTrue, yPred) * 100
error = 100 - accuracy
print(yTrue, yPred, np.max(yPred))
print("Accuracy : ", accuracy)
print("Error : ", error)
0 reactions
almoghitelman commented, Mar 15, 2021

Hi, I tried to implement the code above to get a prediction on a test image, but K.eval(arcface_features @ W) returns a matrix with negative values that don't sum to 1. The whole training process is built the same way as provided here. As I understood it, the predicted matrix values are probabilities that sum to 1. Is that correct? Where is the problem in my code? Results and model attached.

Thanks!

Model: "model_1"

Layer (type)                     Output Shape          Param #   Connected to
input_1 (InputLayer)             (None, 112, 112, 3)   0
conv2d_1 (Conv2D)                (None, 112, 112, 16)  448       input_1[0][0]
batch_normalization_1 (BatchNor  (None, 112, 112, 16)  64        conv2d_1[0][0]
activation_1 (Activation)        (None, 112, 112, 16)  0         batch_normalization_1[0][0]
conv2d_2 (Conv2D)                (None, 112, 112, 16)  2320      activation_1[0][0]
batch_normalization_2 (BatchNor  (None, 112, 112, 16)  64        conv2d_2[0][0]
activation_2 (Activation)        (None, 112, 112, 16)  0         batch_normalization_2[0][0]
max_pooling2d_1 (MaxPooling2D)   (None, 56, 56, 16)    0         activation_2[0][0]
conv2d_3 (Conv2D)                (None, 56, 56, 32)    4640      max_pooling2d_1[0][0]
batch_normalization_3 (BatchNor  (None, 56, 56, 32)    128       conv2d_3[0][0]
activation_3 (Activation)        (None, 56, 56, 32)    0         batch_normalization_3[0][0]
conv2d_4 (Conv2D)                (None, 56, 56, 32)    9248      activation_3[0][0]
batch_normalization_4 (BatchNor  (None, 56, 56, 32)    128       conv2d_4[0][0]
activation_4 (Activation)        (None, 56, 56, 32)    0         batch_normalization_4[0][0]
max_pooling2d_2 (MaxPooling2D)   (None, 28, 28, 32)    0         activation_4[0][0]
conv2d_5 (Conv2D)                (None, 28, 28, 64)    18496     max_pooling2d_2[0][0]
batch_normalization_5 (BatchNor  (None, 28, 28, 64)    256       conv2d_5[0][0]
activation_5 (Activation)        (None, 28, 28, 64)    0         batch_normalization_5[0][0]
conv2d_6 (Conv2D)                (None, 28, 28, 64)    36928     activation_5[0][0]
batch_normalization_6 (BatchNor  (None, 28, 28, 64)    256       conv2d_6[0][0]
activation_6 (Activation)        (None, 28, 28, 64)    0         batch_normalization_6[0][0]
max_pooling2d_3 (MaxPooling2D)   (None, 14, 14, 64)    0         activation_6[0][0]
conv2d_7 (Conv2D)                (None, 14, 14, 128)   73856     max_pooling2d_3[0][0]
batch_normalization_7 (BatchNor  (None, 14, 14, 128)   512       conv2d_7[0][0]
activation_7 (Activation)        (None, 14, 14, 128)   0         batch_normalization_7[0][0]
conv2d_8 (Conv2D)                (None, 14, 14, 128)   147584    activation_7[0][0]
batch_normalization_8 (BatchNor  (None, 14, 14, 128)   512       conv2d_8[0][0]
activation_8 (Activation)        (None, 14, 14, 128)   0         batch_normalization_8[0][0]
max_pooling2d_4 (MaxPooling2D)   (None, 7, 7, 128)     0         activation_8[0][0]
conv2d_9 (Conv2D)                (None, 7, 7, 256)     295168    max_pooling2d_4[0][0]
batch_normalization_9 (BatchNor  (None, 7, 7, 256)     1024      conv2d_9[0][0]
activation_9 (Activation)        (None, 7, 7, 256)     0         batch_normalization_9[0][0]
conv2d_10 (Conv2D)               (None, 7, 7, 256)     590080    activation_9[0][0]
batch_normalization_10 (BatchNo  (None, 7, 7, 256)     1024      conv2d_10[0][0]
activation_10 (Activation)       (None, 7, 7, 256)     0         batch_normalization_10[0][0]
max_pooling2d_5 (MaxPooling2D)   (None, 3, 3, 256)     0         activation_10[0][0]
batch_normalization_11 (BatchNo  (None, 3, 3, 256)     1024      max_pooling2d_5[0][0]
dropout_1 (Dropout)              (None, 3, 3, 256)     0         batch_normalization_11[0][0]
flatten_1 (Flatten)              (None, 2304)          0         dropout_1[0][0]
dense_1 (Dense)                  (None, 128)           295040    flatten_1[0][0]
batch_normalization_12 (BatchNo  (None, 128)           512       dense_1[0][0]
input_2 (InputLayer)             (None, 47)            0
arc_face_1 (ArcFace)             (None, 47)            6016      batch_normalization_12[0][0]
                                                                 input_2[0][0]

import numpy as np
from keras import backend as K
from keras.models import Model, load_model

# Single test image: add a batch axis and scale pixel values to [0, 1]
x_new = np.expand_dims(x_new, axis=0)
x_new = np.array(x_new, dtype=np.float32) / 255.0

arcface_model = load_model(Params['model_path'], custom_objects={'ArcFace': ArcFace})
arcface_model.summary()
# W: the ArcFace layer's class-weight matrix, shape (128, 47) here
W = arcface_model.get_layer('arc_face_1').W
#W = arcface_model.get_weights
# Rebuild the model to emit the 128-d embedding from the image input only
arcface_model = Model(inputs=arcface_model.input[0], outputs=arcface_model.layers[-3].output)
arcface_features = arcface_model.predict(x_new, verbose=0)
# L2-normalize the embedding before comparing it against the class weights
arcface_features /= np.linalg.norm(arcface_features, axis=1, keepdims=True)
print(W)
print(arcface_features)
#yTrue = np.argmax(X1i[1], axis=1)
calc = K.eval(arcface_features @ W)  # per-class similarity scores
yPred = np.argmax(calc, axis=1)
print(calc)

calc matrix: [[-1.3757099 -1.3127273 -1.0342306 -1.052017 -1.3646903 -0.99030894 -1.0056239 -0.9335255 -1.0159407 -0.9508162 -0.87866336 -1.1595434 -1.0569867 -1.0423982 -1.1982433 -1.1383088 -0.95849425 -1.0996643 -0.96682745 -0.96783435 -1.1292582 -1.1419181 -1.213215 -1.1428651 -1.8159819 -1.8428197 -1.4751792 -2.1246967 -2.9295607 -1.2175179 -1.458914 -5.2540975 -2.6965609 -0.25673357 -1.9802661 -1.4791859 -2.3778896 -2.2576988 -2.724395 -1.3704951 -4.3583426 -0.5972382 -0.281455 -0.49513057 -0.68900996 -3.0312972 -0.28473428]]
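One plausible reason the scores above fall well outside [-1, 1] (e.g. -5.25): in the keras-arcface layer this thread appears to use, W is L2-normalized inside call(), so the raw W variable read via get_layer('arc_face_1').W is not unit-norm. Under that assumption, a hedged sketch that normalizes each class column of W before the matmul, restoring true cosine similarities:

W_np = K.eval(W)                                     # weight matrix as numpy, shape (128, 47)
W_np /= np.linalg.norm(W_np, axis=0, keepdims=True)  # unit-norm each class column
calc = arcface_features @ W_np                       # cosine similarities, now in [-1, 1]
yPred = np.argmax(calc, axis=1)                      # predicted class index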

Read more comments on GitHub >

Top Results From Across the Web

Getting low test accuracy? Compare the training and test sets ...
When creating the train and test datasets correctly, the classifier's mean accuracy drops from 0.93 to 0.53. This is exactly what we expected!...
Read more >
What if high validation accuracy but low test ... - Cross Validated
However, my best validation accuracy (52%) yields a very low test accuracy, e.g., 49%. Then, I have to report 49% as my overall...
Read more >
High train score, very low test score | Data Science ... - Kaggle
In this way our model can be trained and tested on different data, Testing accuracy is a better estimate than training accuracy of...
Read more >
Poor testing accuracy, while having very good training and ...
Make sure your training and testing data are picked randomly and represent as accurately as possible the same distribution and the real ...
Read more >
A Simple Intuition for Overfitting, or Why Testing on Training ...
The flaw with evaluating a predictive model on training data is that it does not inform you on how well the model has...
Read more >
