Allow user-defined metrics
I would like to allow users to pass their own metrics into `compute_`. Right now, we have to add a new metric to climpred first. For example, a test like the following should then pass:
```python
# imports as used in climpred's test suite (module paths may vary by version)
import pytest
from xarray.testing import assert_allclose

from climpred.constants import PM_COMPARISONS
from climpred.prediction import compute_perfect_model


def my_mse(forecast, reference, dim='svd', **metric_kwargs):
    return ((forecast - reference) ** 2).mean(dim)


@pytest.mark.parametrize('comparison', PM_COMPARISONS)
@pytest.mark.parametrize('metric', [my_mse])
def test_new_metric_passed_to_compute(pm_da_ds1d, pm_da_control1d, metric, comparison):
    actual = compute_perfect_model(
        pm_da_ds1d, pm_da_control1d, comparison=comparison, metric=metric)
    expected = compute_perfect_model(
        pm_da_ds1d, pm_da_control1d, comparison=comparison, metric='mse')
    assert_allclose(actual, expected)
```
However, implementing this requires restructuring `metrics.py` and `compute_`. `constants.py` would need to generate `DETERMINISTIC_HINDCAST_METRICS`, `POSITIVELY_ORIENTED_METRICS`, … dynamically, because the logic in `compute_` currently asks several times what kind of metric is applied.
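If the metric lists in `constants.py` were derived from metric objects that describe themselves, `compute_` could check properties instead of list membership. A minimal sketch of that idea; `MetricInfo` and its attributes are hypothetical, not climpred's actual API:

```python
from dataclasses import dataclass

# Hypothetical sketch: if every metric carried its own properties,
# constants.py could derive these lists instead of hand-maintaining them.
@dataclass
class MetricInfo:
    name: str
    positive: bool        # is a larger value better?
    probabilistic: bool   # does the metric need an ensemble/member dimension?

ALL_METRICS = [
    MetricInfo('mse', positive=False, probabilistic=False),
    MetricInfo('pearson_r', positive=True, probabilistic=False),
    MetricInfo('crps', positive=False, probabilistic=True),
]

# lists like these would then be generated dynamically:
DETERMINISTIC_HINDCAST_METRICS = [m.name for m in ALL_METRICS if not m.probabilistic]
POSITIVELY_ORIENTED_METRICS = [m.name for m in ALL_METRICS if m.positive]
```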
I first thought of just allowing `metric` to be a function in `get_metric_function`, but how would this function then pass through the flow of `compute_`? Would we just let it bypass the other checks (a bad idea)?
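A minimal sketch of what that could look like; the dict lookup below stands in for climpred's actual name resolution, and the callable branch is the hypothetical addition:

```python
def get_metric_function(metric, all_metrics):
    """Resolve `metric` to a callable.

    Sketch only: `all_metrics` stands in for climpred's mapping of
    metric names to metric functions.
    """
    if callable(metric):
        # User-defined metric: returned as-is. Note this also bypasses every
        # later check on the metric's properties (orientation, probabilistic
        # vs. deterministic, ...), which is the concern raised above.
        return metric
    try:
        return all_metrics[metric]
    except KeyError:
        raise KeyError(
            f'Specify metric from {list(all_metrics)}; got {metric!r}')
```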
This would let users quickly try out new metrics in climpred. If their metric is not available, the current path to implementing a new metric is long (see https://github.com/bradyrx/climpred/pull/264).
So there are two questions here: should we implement this? And if so, how best?
Top GitHub Comments
Correct. I'd say the latter is the preferred way. You could move the docstrings there, which would get pulled into the API docs, and you can add anything unique to the given metric that the base class (`Metric`) doesn't have.
so I shouldn't do:

…

but rather

…
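Roughly, the contrast being discussed is a bare metric function versus one wrapped in the `Metric` base class that carries the docstring and metadata. A sketch with hypothetical names; climpred's actual `Metric` signature may differ:

```python
# Hypothetical sketch only; climpred's real Metric class and its arguments
# may differ. The point is the contrast: registering a bare function vs.
# wrapping it in a Metric object that carries the docstring and metadata.

class Metric:
    """Stand-in for the Metric base class discussed above."""

    def __init__(self, name, function, positive, probabilistic, unit_power=1):
        self.name = name
        self.function = function
        self.positive = positive            # is a larger value better?
        self.probabilistic = probabilistic  # needs a member/ensemble dimension?
        self.unit_power = unit_power        # power of the input units in the result


def _mse(forecast, reference, dim=None, **metric_kwargs):
    """Mean Squared Error."""
    return ((forecast - reference) ** 2).mean(dim)


# rather than exposing the bare function `_mse` directly, wrap it so that
# compute_ (and the API docs) can read its properties and docstring:
mse = Metric(
    name='mse',
    function=_mse,
    positive=False,       # lower MSE is better
    probabilistic=False,
    unit_power=2,
)
```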