RunningAverage: Confusion matrix must have at least one example before it can be computed

See original GitHub issue

🐛 Bug description

I tried to use a DiceCoefficient metric based on ConfusionMatrix in conjunction with an ignite.contrib.handlers.tqdm_logger.ProgressBar. This raises the NotComputableError shown below. When I remove the running average metric from the metrics argument of the progress bar, I still get this error, suggesting it is not related to the progress bar (which is attached to the engine after the metrics have been attached). The error also persists when attaching the RunningAverage after the DiceCoefficient.
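
For context, a minimal sketch of the setup described above might look like the following (the step function, the binary_to_two_class transform, and the tensor shapes are illustrative assumptions, not the actual training code):

import torch
from ignite.engine import Engine
from ignite.contrib.handlers import ProgressBar
from ignite.metrics import ConfusionMatrix, DiceCoefficient, RunningAverage

def binary_to_two_class(output):
    # hypothetical transform: turn binary y_pred into the
    # (batch_size, num_classes) scores that ConfusionMatrix expects
    y_pred, y = output
    return torch.stack([1.0 - y_pred, y_pred], dim=1), y.long()

def step(engine, batch):
    # dummy binary predictions and targets standing in for a real model
    return torch.randint(0, 2, (8,)).float(), torch.randint(0, 2, (8,))

engine = Engine(step)
cm = ConfusionMatrix(num_classes=2, output_transform=binary_to_two_class)
dsc = DiceCoefficient(cm)
RunningAverage(dsc).attach(engine, "running_avg_dsc")
ProgressBar().attach(engine, ["running_avg_dsc"])
engine.run(list(range(10)))  # raises NotComputableError on ignite 0.4.5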

File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 701, in run
    return self._internal_run()
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 774, in _internal_run
    self._handle_exception(e)
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 469, in _handle_exception
    raise e
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 744, in _internal_run
    time_taken = self._run_once_on_dataset()
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 848, in _run_once_on_dataset
    self._handle_exception(e)
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 469, in _handle_exception
    raise e
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 835, in _run_once_on_dataset
    self._fire_event(Events.ITERATION_COMPLETED)
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 424, in _fire_event
    func(*first, *(event_args + others), **kwargs)
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metric.py", line 340, in completed
    result = self.compute()
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/running_average.py", line 97, in compute
    self._value = self._get_src_value()
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/running_average.py", line 113, in _get_metric_value
    return self.src.compute()
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in compute
    materialized = [_get_value_on_cpu(i) for i in self.args]
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in <listcomp>
    materialized = [_get_value_on_cpu(i) for i in self.args]
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 145, in _get_value_on_cpu
    v = v.compute()
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in compute
    materialized = [_get_value_on_cpu(i) for i in self.args]
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in <listcomp>
    materialized = [_get_value_on_cpu(i) for i in self.args]
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 145, in _get_value_on_cpu
    v = v.compute()
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in compute
    materialized = [_get_value_on_cpu(i) for i in self.args]
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in <listcomp>
    materialized = [_get_value_on_cpu(i) for i in self.args]
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 145, in _get_value_on_cpu
    v = v.compute()
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in compute
    materialized = [_get_value_on_cpu(i) for i in self.args]
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in <listcomp>
    materialized = [_get_value_on_cpu(i) for i in self.args]
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 145, in _get_value_on_cpu
    v = v.compute()
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in compute
    materialized = [_get_value_on_cpu(i) for i in self.args]
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in <listcomp>
    materialized = [_get_value_on_cpu(i) for i in self.args]
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 145, in _get_value_on_cpu
    v = v.compute()
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metric.py", line 589, in another_wrapper
    return func(self, *args, **kwargs)
  File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/confusion_matrix.py", line 139, in compute
    raise NotComputableError("Confusion matrix must have at least one example before it can be computed.")
ignite.exceptions.NotComputableError: Confusion matrix must have at least one example before it can be computed.

Environment

  • PyTorch Version (e.g., 1.4): 1.8.1
  • Ignite Version (e.g., 0.3.0): 0.4.5
  • OS (e.g., Linux): CentOS 7
  • How you installed Ignite (conda, pip, source): conda
  • Python version: 3.8.10
  • Any other relevant information:

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 13 (3 by maintainers)

Top GitHub Comments

1 reaction
trsvchn commented, Aug 18, 2021

@schuhschuh Thank you for the questions! I’ll start with the second one:

The work-around I thought of gives me another problem. I have a custom output_transform attached to ConfusionMatrix which converts the binary y_pred engine outputs into one-hot encoded 2-class vectors as required. This works fine when I just attach the dsc = DiceCoefficient(cm) metric lambda to the engine. However, when I use RunningAverage(DiceCoefficient(cm)), I receive the following error: …

By default, output_transform is called only in attached mode (similar to issue #2159). It is applied in iteration_completed:

https://github.com/pytorch/ignite/blob/0e0200d7911776f5cd378f6ef29fde169836837c/ignite/metrics/running_average.py#L123

then

https://github.com/pytorch/ignite/blob/0e0200d7911776f5cd378f6ef29fde169836837c/ignite/metrics/metric.py#L296

and then this recursive call in the case of MetricsLambda:

https://github.com/pytorch/ignite/blob/0e0200d7911776f5cd378f6ef29fde169836837c/ignite/metrics/metric.py#L322

As you can see, MetricsLambda doesn't implement its own iteration_completed, so there is no propagation: only update is recursive, not the whole iteration_completed method. So, yeah, thank you for pointing this out! I think this might be a use case for #2159.
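
If upgrading is not an option, one possible workaround on v0.4.5 (an assumption following from the analysis above, not an officially documented fix) is to attach the MetricsLambda itself as well, so that the dependency metrics are reset and updated on every iteration before the running average is computed:

# hypothetical v0.4.5 workaround, reusing the binary_to_two_class
# transform and engine from the sketch in the bug description
cm = ConfusionMatrix(num_classes=2, output_transform=binary_to_two_class)
dsc = DiceCoefficient(cm)
dsc.attach(engine, "dsc")  # attaching the MetricsLambda also attaches cm's update
RunningAverage(dsc).attach(engine, "running_avg_dsc")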

1 reaction
trsvchn commented, Aug 13, 2021

@schuhschuh thank you for your patience!

Looks like RunningAverage in v0.4.5 is not able to update a MetricsLambda and its dependency metrics, such as ConfusionMatrix in your case.

https://github.com/pytorch/ignite/blob/fbedab9d2ab99d7160388a4c13418087972a7243/ignite/metrics/metrics_lambda.py#L83-L87

Here is another example you can use to reproduce this kind of issue:

# v0.4.5
...
    engine = Engine(process_function)

    acc = Accuracy()
    err = (1.0 - acc) * 100.0
    running_avg_err = RunningAverage(err)
    running_avg_err.attach(engine, "running_avg_err")

    engine.run(list(range(100)))
    return 0
...
NotComputableError: Accuracy must have at least one example before it can be computed.
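
A self-contained version of that snippet might look like this (the process_function returning random predictions is an assumption for illustration):

# v0.4.5, self-contained sketch
import torch
from ignite.engine import Engine
from ignite.metrics import Accuracy, RunningAverage

def process_function(engine, batch):
    # dummy (y_pred, y) pair: random 2-class scores and random labels
    return torch.rand(4, 2), torch.randint(0, 2, (4,))

engine = Engine(process_function)
acc = Accuracy()
err = (1.0 - acc) * 100.0  # arithmetic on a metric builds a MetricsLambda
running_avg_err = RunningAverage(err)
running_avg_err.attach(engine, "running_avg_err")
engine.run(list(range(100)))  # raises NotComputableError on v0.4.5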

Recursive update for MetricsLambda was implemented in 0.4.6, see #2091:

https://github.com/pytorch/ignite/blob/84e51c3b4987c16f6dd27a6abd2f9ff95da3dc6c/ignite/metrics/metrics_lambda.py#L85-L96

As you can see, in this case the update method will be invoked by RunningAverage, and this time it will recursively update all dependent metrics.
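
Roughly, the linked change can be pictured like this (a simplified paraphrase for illustration, not the exact library code):

# simplified sketch of MetricsLambda's recursive update in v0.4.6
def _internal_update(self, output):
    for arg in self.args:
        if isinstance(arg, MetricsLambda):
            arg._internal_update(output)  # recurse into nested lambdas
        elif isinstance(arg, Metric):
            arg.update(output)  # update leaf metrics such as ConfusionMatrix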

To solve this issue, it is recommended to upgrade ignite to v0.4.6. In addition, please note that in v0.4.6 there is no need to attach the source metric separately when using it with RunningAverage; just attach the running average:

# v0.4.6
...
    engine = Engine(process_function)

    cm = ConfusionMatrix(2)
    dsc = DiceCoefficient(cm)
    running_avg_dsc = RunningAverage(dsc)

    running_avg_dsc.attach(engine, "running_avg_dsc")

    pbar = ProgressBar()
    pbar.attach(engine, ["running_avg_dsc"])

    engine.run(list(range(100)))
    return 0
...
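
For completeness, a self-contained version of the example above might look like the following (the step function returning random (y_pred, y) pairs is an illustrative assumption):

# v0.4.6, self-contained sketch
import torch
from ignite.engine import Engine
from ignite.contrib.handlers import ProgressBar
from ignite.metrics import ConfusionMatrix, DiceCoefficient, RunningAverage

def step(engine, batch):
    # dummy 2-class scores and integer targets standing in for a real model
    return torch.rand(4, 2), torch.randint(0, 2, (4,))

engine = Engine(step)
cm = ConfusionMatrix(num_classes=2)
dsc = DiceCoefficient(cm)
running_avg_dsc = RunningAverage(dsc)
running_avg_dsc.attach(engine, "running_avg_dsc")  # attaching only the running average is enough

pbar = ProgressBar()
pbar.attach(engine, ["running_avg_dsc"])

engine.run(list(range(100)))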