Get model metrics after fitting with compute_metrics=False
See original GitHub issue

Hello,
I’m training a retrieval model on a large dataset (>800,000 interactions) with many unique items (>300,000), using precomputed embeddings and contextual data as input.
Because of the large number of unique items, computing the top-K accuracy metric is very slow, so I deactivated it:
def __init__(self, user_model, candidate_model, task):
    super().__init__()
    self.candidate_model: tf.keras.Model = candidate_model
    self.user_model: tf.keras.Model = user_model
    self.task: tf.keras.layers.Layer = task
    self.compute_metrics = False

def compute_loss(self, features, training=False) -> tf.Tensor:
    hist, context, label = features
    user_embeddings = self.user_model([hist, context])
    positive_candidates_embeddings = self.candidate_model(label)
    # The task computes the loss and the metrics.
    return self.task(user_embeddings, positive_candidates_embeddings,
                     compute_metrics=self.compute_metrics)
The problem is that now I only get the loss value. Even after setting self.compute_metrics to True and evaluating with model.evaluate, the metric is still 0:
model.compute_metrics = True
results_eval = model.evaluate(val_gen, verbose=0)
print(results_eval)
# results_eval: [0.0, 463.1528625488281, 0, 463.1528625488281]
So here is my question: is there a way to compute the model’s metrics after deactivating them during fitting?
Thanks for your help!
Issue Analytics
- Created 3 years ago
- Comments: 9

Yes, I found a workaround! Here are the steps:
1. In your model, at the end of compute_loss, use compute_metrics=not training when you return your task’s loss.
2. Use None as the metric when defining your task: task = tfrs.tasks.Retrieval(metrics=None)
3. Train your model as usual.
4. Define a new metric on your model: model.task.factorized_metrics = tfrs.metrics.FactorizedTopK(candidates=...)
5. Compile your model: model.compile()
6. model.evaluate should now give the proper metrics.

Hey @yunruili, it’s an argument that you supply when you call the retrieval layer: https://github.com/tensorflow/recommenders/blob/71f85dc0a023f108c09ff4721f526abc62852bb4/tensorflow_recommenders/tasks/retrieval.py#L95-L102
You should set it in the compute_loss function of your subclassed tfrs.Model.
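The toggle in step 1 is easy to check in isolation. Below is a minimal pure-Python stand-in (no TensorFlow required; ToyTask and its metric are hypothetical placeholders, not the real tfrs.tasks.Retrieval API) that sketches the pattern: skip the metric while training, attach it after fitting, and compute it only at evaluation time.

```python
class ToyTask:
    """Hypothetical stand-in for a retrieval task: always returns the loss,
    and optionally updates an expensive metric when one is attached."""
    def __init__(self, metric=None):
        self.factorized_metrics = metric   # step 2: None -> no metric during training
        self.last_metric = None

    def __call__(self, loss_value, compute_metrics=True):
        if compute_metrics and self.factorized_metrics is not None:
            self.last_metric = self.factorized_metrics(loss_value)
        return loss_value

def compute_loss(task, loss_value, training=False):
    # Step 1 of the workaround: compute metrics only outside of training.
    return task(loss_value, compute_metrics=not training)

task = ToyTask(metric=None)                 # defined without a metric
compute_loss(task, 1.5, training=True)
print(task.last_metric)                     # None: skipped while training

task.factorized_metrics = lambda x: x * 2   # step 4: attach a metric post-hoc
compute_loss(task, 1.5, training=False)
print(task.last_metric)                     # 3.0: computed at evaluation time
```

The key point the stand-in illustrates is that the flag alone is not enough: as in step 4 of the workaround, a metric object must actually be attached before evaluation, otherwise there is nothing to compute and the reported value stays 0.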