Breaking change in metrics behaviour
See original GitHub issue.
Bug description
Following https://github.com/pytorch/ignite/pull/968 (and the associated issue), a metric's output is flattened if it returns a mapping/dict. In detail, with ignite v0.4.2, for a custom metric we have the following:
class PerformanceIndicators(Metric):
    def compute(self):
        # ...
        return {
            'a': 12,
            'b': 23
        }

PerformanceIndicators().attach(evaluator, name="indicators")
assert "a" in evaluator.state.metrics
assert "b" in evaluator.state.metrics
# This is a breaking change introduced by the PR
assert "indicators" not in evaluator.state.metrics
print(evaluator.state.metrics)
> {'a': 12, 'b': 23}
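The flattening behaviour shown above can be sketched in plain Python. This is a minimal mock of the mechanism described in the issue, not ignite's actual implementation; the `completed` helper and `state_metrics` dict are hypothetical stand-ins for `Metric.completed` and `evaluator.state.metrics`:

```python
def completed(state_metrics, name, result):
    """Mock of how a dict-returning metric is stored (sketch only).

    If the metric's compute() returns a mapping, each key is written
    directly into the metrics dict and the attach-time name is dropped,
    which is the breaking change reported here.
    """
    if isinstance(result, dict):
        state_metrics.update(result)  # flattened: 'a', 'b' land at top level
    else:
        state_metrics[name] = result  # scalar metrics keep their name


metrics = {}
completed(metrics, "indicators", {"a": 12, "b": 23})
assert metrics == {"a": 12, "b": 23}
assert "indicators" not in metrics  # the name "indicators" is lost
```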
The questions we would like to address in this issue:
- Should we fix the breaking change by re-adding the dict into evaluator.state.metrics, such that we'll have:
print(evaluator.state.metrics)
> {'a': 12, 'b': 23, 'indicators': {'a': 12, 'b': 23}}
- The name parameter (e.g. "indicators") is never used. Should we prepend it to the flattened names (e.g. indicators/a, indicators/b), or can we accept that it is never used?
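Both proposals can be combined in one small sketch. The `attach_result` helper below is hypothetical (not ignite's API): it prefixes flattened keys with the metric name, and `keep_dict=True` additionally re-adds the whole dict under the metric's own name:

```python
def attach_result(state_metrics, name, result, keep_dict=False):
    """Hypothetical helper sketching the two options discussed above.

    Option 2: flattened keys are prefixed as '<name>/<key>'.
    Option 1 (keep_dict=True): the original dict is also stored
    under the metric's name, so nothing is lost.
    """
    if isinstance(result, dict):
        for key, value in result.items():
            state_metrics[f"{name}/{key}"] = value
        if keep_dict:
            state_metrics[name] = result
    else:
        state_metrics[name] = result


metrics = {}
attach_result(metrics, "indicators", {"a": 12, "b": 23}, keep_dict=True)
print(metrics)
# {'indicators/a': 12, 'indicators/b': 23, 'indicators': {'a': 12, 'b': 23}}
```

Prefixing also avoids silent collisions when two dict-returning metrics happen to share a key such as 'a'.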
Thanks @lidq92 for reporting this.
Environment
- PyTorch Version (e.g., 1.4): 1.6.0
- Ignite Version (e.g., 0.3.0): 0.4.2
- OS (e.g., Linux):
- How you installed Ignite (conda, pip, source):
- Python version:
- Any other relevant information:
Issue Analytics
- State:
- Created 3 years ago
- Comments: 8 (4 by maintainers)
@Yevgnen thanks for pointing that out! To avoid duplicated plots, for now it is possible to specify which metrics to plot instead of using "all":
Looking at the WandB logger's code, we log everything: https://github.com/pytorch/ignite/blob/9230a7319047b37ce19d956e024fa1b86030c30a/ignite/contrib/handlers/wandb_logger.py#L271-L275
However, we check the data type for the others (Visdom, MLflow, Polyaxon, ClearML, …), e.g. Neptune: https://github.com/pytorch/ignite/blob/9230a7319047b37ce19d956e024fa1b86030c30a/ignite/contrib/handlers/neptune_logger.py#L344-L351
Probably we would like to make that uniform across all loggers.
I'll create a new issue for a uniform approach between the loggers.
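A uniform type check shared by all loggers could look roughly like the sketch below. `filter_scalar_metrics` is a hypothetical helper, not ignite's actual API; it keeps only numeric values and warns about anything a scalar-oriented logger cannot handle:

```python
import numbers
import warnings


def filter_scalar_metrics(metrics):
    """Hypothetical shared check: keep numeric metric values, warn on the rest.

    Sketches the kind of uniform data-type check the Neptune logger already
    performs and that could be applied to WandB and the other loggers too.
    """
    out = {}
    for key, value in metrics.items():
        if isinstance(value, numbers.Number):
            out[key] = value
        else:
            warnings.warn(
                f"Logger cannot log metric {key!r} "
                f"of type {type(value).__name__}; skipping it"
            )
    return out


print(filter_scalar_metrics({"loss": 0.5, "cm": [[1, 0], [0, 1]]}))
# {'loss': 0.5}
```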