Mnist siamese example returning wrong accuracy values
I am running the mnist siamese example, unmodified, from the Keras 1.2.0 examples. I am getting an accuracy of
* Accuracy on training set: 0.42%
* Accuracy on test set: 2.64%
instead of the values documented in the example:
Gets to 99.5% test accuracy after 20 epochs.
At first I suspected a wrong image dimension ordering and tried different configurations in keras.json, but the problem persists. After inspecting the predictions against the true values, I found that the prediction vector is shifted by exactly one, so I checked for possible typos swapping 1 and 0 in the loss function and in the pair construction, but everything seems OK. I am running the code on a MacBook Pro with Python 2.7 and TensorFlow 0.12.1. Has anyone experienced the same issue?
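For what it's worth, the reported numbers are almost exactly the complement of the expected 99.5%, which is consistent with the 0/1 label convention being flipped somewhere. A minimal numpy sketch (the distances and labels here are hypothetical, not taken from the example) showing how flipping the convention complements the accuracy:

```python
import numpy as np

# Hypothetical distances predicted by a siamese network for 4 pairs
# (small distance = similar). Labels follow the paper's convention:
# y = 0 for similar pairs, y = 1 for dissimilar pairs.
distances = np.array([0.1, 0.9, 0.2, 0.8])
y_true = np.array([0, 1, 0, 1])

# Thresholding at 0.5: a small distance predicts "similar" (0).
y_pred = (distances > 0.5).astype(int)

acc_paper = np.mean(y_pred == y_true)          # 1.0 under the paper's convention
acc_flipped = np.mean(y_pred == (1 - y_true))  # 0.0 if the labels are inverted
```

With inverted labels, every correct prediction is scored as wrong, so an accuracy near 100% shows up as an accuracy near 0%.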
- [ ] Check that you are up-to-date with the master branch of Keras. You can update with: `pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps`
- [ ] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here.
- [ ] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with: `pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps`
- [ ] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).
Issue Analytics
- Created: 7 years ago
- Comments: 23 (2 by maintainers)
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.
After digging, it looks like @Tokukawa is right: there is a switching problem. According to *Dimensionality Reduction by Learning an Invariant Mapping*, Y = 0 if the pair is deemed similar and Y = 1 if the pair is deemed dissimilar. This is exactly what `create_pairs` does. Moreover, the loss function is indeed reversed: the `max` between 0 and `m - D_w` should be the loss for *dissimilar* pairs, and therefore the loss function is also reversed in terms of `y_true`.
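For reference, the contrastive loss as defined in the paper can be sketched in numpy (margin m = 1 is assumed here; Y = 0 means similar, Y = 1 means dissimilar, and `d` is the Euclidean distance D_w between the two embeddings):

```python
import numpy as np

def contrastive_loss(y_true, d, margin=1.0):
    """Contrastive loss from Hadsell et al. (2006).

    y_true = 0 for similar pairs, y_true = 1 for dissimilar pairs.
    d is the distance D_w between the two embeddings.
    """
    # Similar pairs (Y = 0) are pulled together: penalized by d^2.
    similar_term = (1 - y_true) * 0.5 * d ** 2
    # Dissimilar pairs (Y = 1) are pushed apart: the max(0, m - d)
    # term applies to them, and vanishes once d exceeds the margin.
    dissimilar_term = y_true * 0.5 * np.maximum(margin - d, 0) ** 2
    return similar_term + dissimilar_term

# A similar pair (Y = 0) at a large distance is penalized...
loss_far_similar = contrastive_loss(0, 0.9)      # 0.5 * 0.9**2 = 0.405
# ...while a dissimilar pair (Y = 1) beyond the margin costs nothing.
loss_far_dissimilar = contrastive_loss(1, 1.5)   # 0.0
```

If the example attaches the `max(0, m - D_w)` term to the *similar* label instead, the loss trains the network with the two cases swapped, which matches the near-complement accuracies reported above.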