Different results between model.evaluate() and model.predict()
I got different results between model.evaluate() and model.predict(). Could someone point out what is wrong in my calculation below? Note that the model, X_test_features, and y_regression_test are identical in both approaches.
Thank you very much!
- Directly use model.evaluate() to get the loss and metrics:
model = define_top_model()
model.compile(loss='mse', optimizer='rmsprop', metrics=['mae', 'mape'])
model.load_weights(model_weights_file)
# evaluate() returns [loss, mae, mape], in the order passed to compile()
scores = model.evaluate(X_test_features, y_regression_test, batch_size=batch_size)
logger.info('mse=%f, mae=%f, mape=%f' % (scores[0], scores[1], scores[2]))
The output is: mse=0.551147, mae=0.589529, mape=10.979756
- Get the predictions as a NumPy array with model.predict(), then compute the metrics with Keras' metric functions:
from keras import backend as K
from keras import metrics

model = define_top_model()
model.compile(loss='mse', optimizer='rmsprop', metrics=['mae', 'mape'])
model.load_weights(model_weights_file)
preds = model.predict(X_test_features, batch_size=batch_size)
# evaluate the symbolic metric tensors in the backend session
tf_session = K.get_session()
mse = metrics.mean_squared_error(y_regression_test, preds)
mae = metrics.mean_absolute_error(y_regression_test, preds)
mape = metrics.mean_absolute_percentage_error(y_regression_test, preds)
logger.info('mse=%f, mae=%f, mape=%f' % (mse.eval(session=tf_session),
                                         mae.eval(session=tf_session),
                                         mape.eval(session=tf_session)))
The output is: mse=0.678286, mae=0.654362, mape=12.249291
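A quick sanity check, not in the original post, that often explains gaps like this is to compare the shape and dtype of the targets against the predictions:

# If these differ, e.g. (n,) vs (n, 1), NumPy/backend broadcasting
# silently changes what the metric functions compute.
print(y_regression_test.shape, y_regression_test.dtype)
print(preds.shape, preds.dtype)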
@parkerzf The issue is that your y has the wrong shape and dtype. Keras fixes the shape for you automatically inside evaluate(), so you get different results depending on whether you go through the model or not.
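To make the shape problem concrete (an illustration, not part of the original comment): if y has shape (n,) while preds has shape (n, 1), subtracting them broadcasts to an (n, n) matrix, so the metric functions average over n² pairs instead of n errors:

import numpy as np

y = np.array([1.0, 2.0, 3.0])            # shape (3,)
preds = np.array([[1.1], [1.9], [3.2]])  # shape (3, 1)
print((y - preds).shape)                  # (3, 3): broadcast, not element-wise
print(np.mean((y - preds) ** 2))          # "mse" averaged over 9 pairs
print(np.mean((y.reshape(-1, 1) - preds) ** 2))  # correct element-wise mse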
This line fixes the issue:
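A sketch of such a fix; the (n, 1) target shape and float32 dtype are assumptions based on the diagnosis above, as the exact line isn't quoted here:

import numpy as np

# Hypothetical fix: align y with the model's output shape and dtype
# before computing metrics outside of evaluate().
y_regression_test = np.asarray(y_regression_test, dtype='float32').reshape(-1, 1)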
With that change, the script should produce the same values through NumPy, evaluate(), or the backend metric functions.
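A self-contained sketch of such a check; the toy model and random data here are made up for illustration, and a TF1-era backend with K.get_session() is assumed:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K
from keras import metrics

# Toy regression data; y deliberately matches the model output shape (n, 1).
X = np.random.rand(100, 8).astype('float32')
y = np.random.rand(100, 1).astype('float32')

model = Sequential([Dense(16, activation='relu', input_dim=8), Dense(1)])
model.compile(loss='mse', optimizer='rmsprop', metrics=['mae'])
model.fit(X, y, epochs=1, verbose=0)

# 1) evaluate(): returns [loss, mae]
eval_mse, eval_mae = model.evaluate(X, y, verbose=0)

# 2) Plain NumPy on the predictions
preds = model.predict(X)
np_mse = np.mean((y - preds) ** 2)

# 3) Backend metric functions, evaluated in the backend session
sess = K.get_session()
k_mse = np.mean(metrics.mean_squared_error(K.constant(y),
                                           K.constant(preds)).eval(session=sess))

print(eval_mse, np_mse, k_mse)  # all three should agree up to float precision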
Cheers
How do I plot the confusion matrix in my code?
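Confusion matrices apply to classifiers rather than to a regression model like the one above, but assuming a classification setup and scikit-learn/matplotlib (neither is used elsewhere in this thread), a minimal sketch:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

# Hypothetical labels; in practice y_pred might come from
# np.argmax(model.predict(X_test), axis=1)
y_true = np.array([0, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 0, 1, 1])

cm = confusion_matrix(y_true, y_pred)
plt.imshow(cm, interpolation='nearest', cmap='Blues')
plt.colorbar()
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.show()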