
Many metrics cannot handle predictions out of [0..1] range


System information

  • Have I written custom code (as opposed to using a stock example script provided in Keras): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 Pro
  • TensorFlow installed from (source or binary): Binary
  • TensorFlow version (use command below): v2.7.0-rc1-69-gc256c071bb2 2.7.0
  • Python version: 3.9.6
  • Bazel version (if compiling from source): N/A
  • GPU model and memory: N/A
  • Exact command to reproduce: N/A

Describe the problem.

When using the BinaryCrossentropy(from_logits=True) loss (the non-default but recommended setting per the BinaryCrossentropy documentation), prediction values can fall outside the [0, 1] range, and this breaks many metrics.

The following metrics are known to break:

  • keras.metrics.TruePositives
  • keras.metrics.FalsePositives
  • keras.metrics.TrueNegatives
  • keras.metrics.FalseNegatives
  • keras.metrics.Precision
  • keras.metrics.Recall
  • keras.metrics.AUC

With any of these metrics in place, the model.fit(..) call fails with one of the following errors:

tensorflow.python.framework.errors_impl.InvalidArgumentError:  assertion failed: [predictions must be <= 1]  

or

tensorflow.python.framework.errors_impl.InvalidArgumentError:  assertion failed: [predictions must be >= 0]  

The full traceback is in the logs section below.
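For context (an illustration not in the original issue): with activation=None the model emits raw, unbounded logits, while these confusion-matrix metrics assert that predictions lie in [0, 1]. A minimal NumPy sketch of the mismatch:

```python
import numpy as np

def sigmoid(z):
    """Map unbounded logits to probabilities in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([-2.3, 0.0, 1.7, 4.1])  # raw outputs of a layer with no activation
probs = sigmoid(logits)                   # what the metrics' assertions expect

print(logits.max() > 1.0)          # logits violate the "predictions <= 1" assertion
print((probs > 0).all() and (probs < 1).all())  # probabilities satisfy it
```

The metrics threshold at 0.5 by default, so they implicitly assume their input is already on the probability scale.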

Describe the current behavior.

model.fit(..) call fails with BinaryCrossentropy(from_logits=True) loss and any of the metrics listed above.

Describe the expected behavior.

model.fit(..) call succeeds.

Standalone code to reproduce the issue.

The following code demonstrates the issue:

import numpy as np
import tensorflow as tf
from tensorflow import keras

x_train = np.random.rand(50, 5)
y_train = np.random.rand(50,)

# The final layer has no activation, so the model outputs raw logits.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(50, activation='relu', name='hidden_1',
                          input_shape=(x_train.shape[1],)),
    tf.keras.layers.Dense(1, activation=None, name='output')
])

METRICS = [
    keras.metrics.Precision(name='precision'),
]

model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=METRICS)

model.fit(x_train, y_train, epochs=20)
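One workaround (a sketch, not from the issue itself): keep probabilities at the output by ending the model with a sigmoid activation and using from_logits=False, so the thresholding metrics receive values in [0, 1]. This gives up the numerical-stability benefit that motivated from_logits=True in the first place:

```python
import numpy as np
import tensorflow as tf

x_train = np.random.rand(50, 5)
y_train = (np.random.rand(50,) > 0.5).astype('float32')  # binary labels

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(50, activation='relu', input_shape=(5,)),
    # 'sigmoid' squashes logits into (0, 1), so Precision's range assertions pass.
    tf.keras.layers.Dense(1, activation='sigmoid', name='output'),
])

model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
              metrics=[tf.keras.metrics.Precision(name='precision')])

model.fit(x_train, y_train, epochs=1, verbose=0)  # no InvalidArgumentError
```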

Additional Info

See the following TensorFlow forum thread for a discussion: https://discuss.tensorflow.org/t/metrics-related-predictions-must-be-1-error/6144

This closed issue also looks relevant: https://github.com/tensorflow/tensorflow/issues/42182

Source code / logs.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\eager\execute.py", line 58, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError:  assertion failed: [predictions must be <= 1] [Condition x <= y did not hold element-wise:] [x (sequential_4/output/BiasAdd:0) = ] [[1.00585222][1.00123906][0.880351603]...] [y (Cast_3/x:0) = ] [1]
         [[node assert_less_equal/Assert/AssertGuard/Assert
 (defined at C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\utils\metrics_utils.py:612)
]] [Op:__inference_train_function_29922]

Errors may have originated from an input operation.
Input Source operations connected to node assert_less_equal/Assert/AssertGuard/Assert:
In[0] assert_less_equal/Assert/AssertGuard/Assert/assert_less_equal/All:
In[1] assert_less_equal/Assert/AssertGuard/Assert/data_0:
In[2] assert_less_equal/Assert/AssertGuard/Assert/data_1:
In[3] assert_less_equal/Assert/AssertGuard/Assert/data_2:
In[4] assert_less_equal/Assert/AssertGuard/Assert/sequential_4/output/BiasAdd:
In[5] assert_less_equal/Assert/AssertGuard/Assert/data_4:
In[6] assert_less_equal/Assert/AssertGuard/Assert/Cast_3/x:

Operation defined at: (most recent call last)
>>>   File "<stdin>", line 1, in <module>
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\utils\traceback_utils.py", line 64, in error_handler
>>>     return fn(*args, **kwargs)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\engine\training.py", line 1216, in fit
>>>     tmp_logs = self.train_function(iterator)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\engine\training.py", line 878, in train_function
>>>     return step_function(self, iterator)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\engine\training.py", line 867, in step_function
>>>     outputs = model.distribute_strategy.run(run_step, args=(data,))
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\engine\training.py", line 860, in run_step
>>>     outputs = model.train_step(data)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\engine\training.py", line 817, in train_step
>>>     self.compiled_metrics.update_state(y, y_pred, sample_weight)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\engine\compile_utils.py", line 460, in update_state
>>>     metric_obj.update_state(y_t, y_p, sample_weight=mask)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\utils\metrics_utils.py", line 73, in decorated
>>>     update_op = update_state_fn(*args, **kwargs)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\metrics.py", line 177, in update_state_fn
>>>     return ag_update_state(*args, **kwargs)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\metrics.py", line 1069, in update_state
>>>     return metrics_utils.update_confusion_matrix_variables(
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\utils\metrics_utils.py", line 612, in update_confusion_matrix_variables
>>>     tf.compat.v1.assert_less_equal(
>>>

Function call stack:
train_function -> assert_less_equal_Assert_AssertGuard_false_28872

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 2
  • Comments: 19 (11 by maintainers)

Top GitHub Comments

4 reactions
haifeng-jin commented, Dec 9, 2021

Triage notes: To make metrics compatible with the loss with from_logits=True, we should add this from_logits to the metrics, too.
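Until such a parameter exists, the triage suggestion can be approximated with a thin subclass that converts logits to probabilities before delegating to the stock metric (a hypothetical sketch; PrecisionFromLogits is not a real Keras class):

```python
import tensorflow as tf

class PrecisionFromLogits(tf.keras.metrics.Precision):
    """Hypothetical wrapper: accept logits by applying sigmoid before updating."""

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Squash unbounded logits into (0, 1) so the parent class's
        # range assertions and 0.5 threshold behave as intended.
        return super().update_state(
            y_true, tf.sigmoid(y_pred), sample_weight=sample_weight)

m = PrecisionFromLogits()
m.update_state([1.0, 0.0], [3.0, -3.0])  # raw logits, outside [0, 1]
```

The same pattern would apply to Recall, AUC, and the confusion-matrix counters listed in the issue.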

2 reactions
SysuJayce commented, Dec 20, 2021

The official metrics classes need a from_logits parameter.

I found that BCE with a sigmoid output is not numerically stable in TF 2.4.4, so I prefer BCE on logits.

It would therefore be friendlier if the official metrics classes accepted a from_logits parameter.
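The instability mentioned here comes from computing log(p) after the sigmoid, which underflows for large-magnitude logits; the from_logits path instead uses an algebraically equivalent, stable form. A NumPy sketch of the two formulations (illustrative only, not TensorFlow's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_from_logits(y, z):
    # Stable rearrangement of -[y*log(sigmoid(z)) + (1-y)*log(1-sigmoid(z))]:
    # max(z, 0) - z*y + log(1 + exp(-|z|)) never takes log of an underflowed value.
    return np.maximum(z, 0.0) - z * y + np.log1p(np.exp(-np.abs(z)))

def bce_from_probs(y, p, eps=1e-7):
    # Naive form: clipping avoids log(0) but caps the loss, distorting gradients
    # for confident mistakes.
    p = np.clip(p, eps, 1.0 - eps)
    return -(y * np.log(p) + (1.0 - y) * np.log1p(-p))

# For y=1 and a very negative logit, the true loss is ~100 nats; the
# probability path saturates at -log(eps) instead.
print(bce_from_logits(1.0, -100.0))          # ~100.0
print(bce_from_probs(1.0, sigmoid(-100.0)))  # ~16.1, capped by clipping
```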
