RunningAverage: Confusion matrix must have at least one example before it can be computed
🐛 Bug description

I tried to use a DiceCoefficient metric based on ConfusionMatrix in conjunction with an ignite.contrib.handlers.tqdm_logger.ProgressBar. This raises the NotComputableError below. When I remove the running-average metric from the metric_names argument of the progress bar, I still get the error, suggesting it is not related to the progress bar (which is attached to the engine after the metrics have been attached). The error also persists when attaching the RunningAverage after the DiceCoefficient.
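A minimal sketch of the setup (the dummy step function, tensor shapes, and metric names here are illustrative, not taken from the original report):

```python
import torch
from ignite.contrib.handlers.tqdm_logger import ProgressBar
from ignite.engine import Engine
from ignite.metrics import ConfusionMatrix, DiceCoefficient, RunningAverage

# Dummy evaluation step producing (y_pred, y) for a 2-class segmentation task.
def step(engine, batch):
    y_pred = torch.rand(2, 2, 16, 16)           # per-class scores
    y = torch.randint(0, 2, size=(2, 16, 16))   # ground-truth labels
    return y_pred, y

engine = Engine(step)

cm = ConfusionMatrix(num_classes=2)
dice = DiceCoefficient(cm)        # a MetricsLambda derived from the confusion matrix
dice.attach(engine, "dice")

avg_dice = RunningAverage(dice)   # running average of the per-batch Dice value
avg_dice.attach(engine, "avg_dice")

ProgressBar().attach(engine, metric_names=["avg_dice"])

engine.run([None] * 8, max_epochs=1)  # fails on ignite v0.4.5
```

On v0.4.5, a setup along these lines ends in the traceback below: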
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 701, in run
return self._internal_run()
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 774, in _internal_run
self._handle_exception(e)
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 469, in _handle_exception
raise e
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 744, in _internal_run
time_taken = self._run_once_on_dataset()
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 848, in _run_once_on_dataset
self._handle_exception(e)
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 469, in _handle_exception
raise e
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 835, in _run_once_on_dataset
self._fire_event(Events.ITERATION_COMPLETED)
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/engine/engine.py", line 424, in _fire_event
func(*first, *(event_args + others), **kwargs)
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metric.py", line 340, in completed
result = self.compute()
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/running_average.py", line 97, in compute
self._value = self._get_src_value()
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/running_average.py", line 113, in _get_metric_value
return self.src.compute()
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in compute
materialized = [_get_value_on_cpu(i) for i in self.args]
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in <listcomp>
materialized = [_get_value_on_cpu(i) for i in self.args]
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 145, in _get_value_on_cpu
v = v.compute()
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in compute
materialized = [_get_value_on_cpu(i) for i in self.args]
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in <listcomp>
materialized = [_get_value_on_cpu(i) for i in self.args]
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 145, in _get_value_on_cpu
v = v.compute()
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in compute
materialized = [_get_value_on_cpu(i) for i in self.args]
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in <listcomp>
materialized = [_get_value_on_cpu(i) for i in self.args]
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 145, in _get_value_on_cpu
v = v.compute()
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in compute
materialized = [_get_value_on_cpu(i) for i in self.args]
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in <listcomp>
materialized = [_get_value_on_cpu(i) for i in self.args]
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 145, in _get_value_on_cpu
v = v.compute()
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in compute
materialized = [_get_value_on_cpu(i) for i in self.args]
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 90, in <listcomp>
materialized = [_get_value_on_cpu(i) for i in self.args]
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metrics_lambda.py", line 145, in _get_value_on_cpu
v = v.compute()
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/metric.py", line 589, in another_wrapper
return func(self, *args, **kwargs)
File "/hf/home/aschuh/tools/conda/envs/istn_dl4/lib/python3.8/site-packages/ignite/metrics/confusion_matrix.py", line 139, in compute
raise NotComputableError("Confusion matrix must have at least one example before it can be computed.")
ignite.exceptions.NotComputableError: Confusion matrix must have at least one example before it can be computed.
Environment
- PyTorch Version (e.g., 1.4): 1.8.1
- Ignite Version (e.g., 0.3.0): 0.4.5
- OS (e.g., Linux): CentOS 7
- How you installed Ignite (conda, pip, source): conda
- Python version: 3.8.10
- Any other relevant information:
@schuhschuh Thank you for the questions! I'll start with the second one:

By default, output_transform is called only in attached mode (similar issue #2159). It is applied in iteration_completed:
https://github.com/pytorch/ignite/blob/0e0200d7911776f5cd378f6ef29fde169836837c/ignite/metrics/running_average.py#L123
then
https://github.com/pytorch/ignite/blob/0e0200d7911776f5cd378f6ef29fde169836837c/ignite/metrics/metric.py#L296
and then this recursive call in the case of MetricsLambda:
https://github.com/pytorch/ignite/blob/0e0200d7911776f5cd378f6ef29fde169836837c/ignite/metrics/metric.py#L322
As you can see, MetricsLambda doesn't implement its own iteration_completed, so there is no propagation; only update is recursive, not the whole iteration_completed method. So, yeah, thank you for pointing this out! I think this might be a use case for #2159.
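A small illustration of that point (the dummy trainer and the dict-shaped output here are assumptions made for the example, not taken from the issue):

```python
from ignite.engine import Engine
from ignite.metrics import RunningAverage

# A hypothetical trainer whose step function returns a dict with a "loss" entry.
trainer = Engine(lambda engine, batch: {"loss": 0.5})

# The output_transform below is applied inside Metric.iteration_completed,
# which attach() registers as an ITERATION_COMPLETED handler; calling
# avg_loss.update(...) directly (detached mode) would bypass the transform.
avg_loss = RunningAverage(output_transform=lambda output: output["loss"])
avg_loss.attach(trainer, "avg_loss")

trainer.run([None] * 4, max_epochs=1)
print(trainer.state.metrics["avg_loss"])  # running average of the loss
```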
@schuhschuh thank you for your patience!

Looks like RunningAverage in v0.4.5 is not able to update MetricsLambda and its dependency metrics, like ConfusionMatrix in your case:
https://github.com/pytorch/ignite/blob/fbedab9d2ab99d7160388a4c13418087972a7243/ignite/metrics/metrics_lambda.py#L83-L87
Here is another example that you can try to reproduce this kind of issue:
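For instance, a minimal sketch of this kind of failure, with dummy data and illustrative names (no progress bar involved):

```python
import torch
from ignite.engine import Engine
from ignite.metrics import ConfusionMatrix, DiceCoefficient, RunningAverage

# Dummy step returning (y_pred, y) for a 2-class problem.
engine = Engine(lambda e, batch: (torch.rand(2, 2, 8, 8),
                                  torch.randint(0, 2, size=(2, 8, 8))))

cm = ConfusionMatrix(num_classes=2)
dice = DiceCoefficient(cm)  # MetricsLambda depending on the confusion matrix

# On v0.4.5, RunningAverage drives only the wrapped MetricsLambda, whose
# update() does not propagate to the underlying ConfusionMatrix, so the
# confusion matrix never receives an example.
RunningAverage(dice).attach(engine, "running_dice")

engine.run([None] * 4, max_epochs=1)  # raises NotComputableError on v0.4.5
```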
Recursive update for MetricsLambda was implemented in 0.4.6, see #2091:
https://github.com/pytorch/ignite/blob/84e51c3b4987c16f6dd27a6abd2f9ff95da3dc6c/ignite/metrics/metrics_lambda.py#L85-L96
As you can see, in this case the update method will be invoked by RunningAverage, and this time it will recursively update all dependent metrics.

To solve this issue it is recommended to upgrade ignite to version v0.4.6. In addition, please note that in v0.4.6 there is no need to attach the wrapped metric separately when using RunningAverage; just attach running_avg:
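A sketch of that usage on v0.4.6+, reusing the illustrative names from the reproduction above (the engine and the metric name "running_avg_dice" are assumptions of the example):

```python
from ignite.metrics import ConfusionMatrix, DiceCoefficient, RunningAverage

cm = ConfusionMatrix(num_classes=2)
dice = DiceCoefficient(cm)

# v0.4.6+: RunningAverage.update recursively updates the MetricsLambda and
# its dependency metrics (here the confusion matrix), so attaching the
# running average alone is sufficient; no separate dice.attach(...) needed.
running_avg = RunningAverage(dice)
running_avg.attach(engine, "running_avg_dice")
```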