
Accuracy and metrics with Model


In #286 I briefly talk about the idea of separating the metrics computation (such as accuracy) from Model. At the moment, you can easily keep track of accuracy in the logs (both history and console logs) with the flag show_accuracy=True in Model.fit(). Unfortunately, this is limited to accuracy and does not handle any other metrics that could be valuable to the user.

We could move the computation of these metrics outside of Model and invoke them via callbacks if one wants to keep track of them during training. That may be valuable in the future, but it could also raise some issues in the short term:

  • It would be impossible to log the accuracy (or any other metrics) with the base logger as callbacks do not interact with each other. One solution would be to let the user create her own logger on a different level of verbosity (possibly by inheriting from the current BaseLogger).
  • We would have to think about how to access the training and validation set with callbacks.

Issue Analytics

  • State: closed
  • Created: 8 years ago
  • Comments: 11 (2 by maintainers)

Top GitHub Comments

18 reactions
khozzy commented, Jun 6, 2016

Hi, I’m new to Keras. Please tell me how to properly implement a custom metric. My code looks like this (I’m using scikit-learn wrapper):

def custom_metric(act, pred):
    print("Ahh, I'm here")
    print(act)
    print(pred)
    return 0.2 # dummy

def create_baseline():
    model = Sequential()
    model.add(Dense(no_features, input_dim=no_features, init='normal', activation='relu'))
    model.add(Dense(1, init='normal', activation='sigmoid'))

    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=[custom_metric])

    return model

estimator = KerasClassifier(build_fn=create_baseline, nb_epoch=epochs, batch_size=batch_size, verbose=0)

results = cross_val_score(estimator, X.as_matrix(), Y, cv=kfold, scoring=scoring)

Which leads to:

Ahh, I'm here
Tensor("dense_64_target:0", shape=(?, ?), dtype=float32)
Tensor("Sigmoid_56:0", shape=(?, 1), dtype=float32)

Can not convert a float into a Tensor or Operation.

How do I write a general metric function that will work across all backends (or cast the arguments act and pred to NumPy arrays)? Do you have any examples?

Regards
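For anyone hitting the same error: a Keras metric must return a tensor built from backend ops (keras.backend), not a plain Python float, which is what returning 0.2 above triggers. A minimal sketch of the intended logic, written here with NumPy stand-ins so it is easy to run; in an actual Keras metric the same body would use K.mean, K.equal, and K.round so the result stays symbolic:

```python
import numpy as np

# In a real Keras metric, replace np.* with keras.backend ops
# (K.mean, K.equal, K.round) so a tensor is returned, not a float.
def custom_metric(y_true, y_pred):
    # fraction of predictions on the correct side of 0.5
    return float(np.mean(np.equal(y_true, np.round(y_pred))))

# rounded predictions [1, 0, 0] vs. truth [1, 0, 1] -> 2/3 correct
print(custom_metric(np.array([1., 0., 1.]), np.array([0.8, 0.1, 0.3])))
```

The same shape-of-solution applies to any custom metric: compose it entirely from backend ops over y_true and y_pred, and Keras can evaluate it per batch on any backend.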

15 reactions
zillionare commented, Mar 14, 2017

When we pass metrics=['accuracy'] at the compile stage, what actually happens under the hood? Which kind of accuracy is computed, given that Keras has binary_accuracy, categorical_accuracy, and others?
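As far as I can tell from the Keras source of that era, 'accuracy' is dispatched based on the loss function: binary_crossentropy maps it to binary_accuracy, sparse_categorical_crossentropy to sparse_categorical_accuracy, and otherwise categorical_accuracy is used. A sketch of the two main variants, with NumPy stand-ins for the backend ops:

```python
import numpy as np

def binary_accuracy(y_true, y_pred):
    # chosen for binary_crossentropy: threshold predictions at 0.5
    return float(np.mean(y_true == np.round(y_pred)))

def categorical_accuracy(y_true, y_pred):
    # chosen for categorical_crossentropy: compare argmax over classes
    return float(np.mean(np.argmax(y_true, axis=-1) ==
                         np.argmax(y_pred, axis=-1)))

# binary case: 3 of 4 predictions land on the right side of 0.5
print(binary_accuracy(np.array([1., 0., 1., 1.]),
                      np.array([0.9, 0.2, 0.4, 0.8])))          # 0.75

# categorical case: 1 of 2 argmaxes match the one-hot labels
print(categorical_accuracy(np.array([[0, 1], [1, 0]]),
                           np.array([[0.3, 0.7], [0.4, 0.6]])))  # 0.5
```

Note that the two can disagree substantially on the same model, which is why it matters which one the dispatch picks.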
