
confusing performance_metrics


Hi,

I think I'm running into the same issue as others (#941?) in understanding performance_metrics.

What one would typically expect from k-fold CV is k values for each validation metric. In Prophet's case, if I understood correctly, k = the number of cutoffs, which should also equal k = int((df.shape[0] - initial - horizon) / period) + 1 (just as a double check: can you confirm?).
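
As an aside, here is a minimal sketch of how one could check the cutoff count empirically with Prophet's diagnostics API (the current package is prophet; older releases shipped as fbprophet). The data file name and the initial/period/horizon values below are illustrative, not from the original issue.

```python
# A minimal sketch: count the cutoffs (folds) Prophet actually generates.
# 'example_data.csv' is a hypothetical file; the window sizes are illustrative.
import pandas as pd
from prophet import Prophet
from prophet.diagnostics import cross_validation

df = pd.read_csv('example_data.csv')  # must have columns: ds (dates), y (values)
m = Prophet().fit(df)

df_cv = cross_validation(m, initial='730 days', period='180 days',
                         horizon='365 days')

# Each cutoff is one fold; compare this count against the formula above.
k = df_cv['cutoff'].nunique()
print(f'number of cutoffs (folds): {k}')
```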

  • How can I get a df_p (the output of performance_metrics) whose length equals the number of cutoffs? (See the sketch after this list.)

  • The confusion also arises from the term horizon apparently being used with two different meanings: at the beginning of the docs it seems to be a fixed window size, while later it becomes a column containing many values…
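
One way to get per-cutoff numbers is to group the cross-validation output by its cutoff column and compute the metric yourself; this is a hedged sketch, not the library's built-in behavior (performance_metrics aggregates by horizon, not by cutoff). df_cv is assumed to come from cross_validation as in the sketch above, and RMSE is just one illustrative metric.

```python
# A sketch: per-cutoff RMSE from the cross_validation output.
# df_cv is assumed to come from prophet.diagnostics.cross_validation,
# so it has 'y', 'yhat' and 'cutoff' columns.
import pandas as pd

squared_err = (df_cv['y'] - df_cv['yhat']) ** 2

# One row per cutoff -> len(df_p_by_cutoff) == number of cutoffs (folds).
df_p_by_cutoff = (
    squared_err.groupby(df_cv['cutoff'])
               .mean()
               .pow(0.5)          # RMSE = sqrt(mean squared error)
               .rename('rmse')
               .reset_index()
)
print(df_p_by_cutoff)
```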

Thanks

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 9 (3 by maintainers)

Top GitHub Comments

1 reaction
ggrrll commented, Jan 13, 2021

So, indeed, CV in time series is trickier…

Here is a nice post describing the main possible approaches (I guess; I am not a big expert): https://medium.com/@soumyachess1496/cross-validation-in-time-series-566ae4981ce4

I guess the devs here (@bletham, …) could think about integrating a few other CV techniques in the future (like blocked CV, which is basically what @ivangvi proposed, if I understood correctly…).
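
For readers unfamiliar with the blocked scheme mentioned above, here is a minimal, Prophet-independent sketch of the idea: split the series into contiguous, non-overlapping blocks and hold out the tail of each block for validation. The function name and parameters (blocked_cv_splits, n_blocks, train_frac) are illustrative, not an existing API.

```python
# A sketch of blocked time-series CV (not part of Prophet's API):
# contiguous, non-overlapping blocks; within each block, train on the
# first part and validate on the rest, so folds never share data.
import numpy as np

def blocked_cv_splits(n_samples, n_blocks=5, train_frac=0.8):
    """Yield (train_idx, valid_idx) index arrays, one pair per block."""
    block_edges = np.linspace(0, n_samples, n_blocks + 1, dtype=int)
    for start, stop in zip(block_edges[:-1], block_edges[1:]):
        split = start + int((stop - start) * train_frac)
        yield np.arange(start, split), np.arange(split, stop)

# Example: 100 observations, 5 blocks of 20 (16 train / 4 validation each).
for train_idx, valid_idx in blocked_cv_splits(100):
    print(train_idx[[0, -1]], valid_idx[[0, -1]])
```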

0 reactions
ofwallaart commented, Feb 22, 2021

Great answers, everyone! Much appreciated; as so often in this field, the answer is "it depends". Very interesting to read about your considerations and trade-offs.
