confusing performance_metrics
Hi,
I guess I am having the same issue as others (#941?) in understanding performance_metrics.
What one would typically expect from k-fold CV is k values for each validation metric.
In Prophet's case, if I understood correctly, k = the number of cutoffs
(which should also be k = int( (df.shape[0] - initial - horizon) / period) + 1
- just as a double check, can you confirm?)
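The arithmetic in the question can be checked with a small sketch. This is not Prophet's exact code, just an illustration of the relation between initial, horizon, period, and the cutoff count that the formula above expresses (real data with gaps or uneven spacing can differ):

```python
# Rough sketch (not Prophet's implementation) of the cutoff-count formula
# above. Prophet generates cutoffs stepping by `period`, keeping `horizon`
# days of data after each cutoff and at least `initial` days before it.
def expected_num_cutoffs(total_days, initial_days, horizon_days, period_days):
    # int((total - initial - horizon) / period) + 1, as in the question;
    # holds when the data are evenly spaced with no missing dates.
    return int((total_days - initial_days - horizon_days) / period_days) + 1

# Example: 3 years of daily data, 2-year initial window,
# 180-day horizon, 90-day period -> 3 cutoffs.
print(expected_num_cutoffs(total_days=1095, initial_days=730,
                           horizon_days=180, period_days=90))
```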
- How can I get a df_p (performance_metrics) whose size equals the number of cutoffs?
- The confusion also arises from the fact that the term horizon is probably used with two different meanings: early in the docs it seems to be a fixed window size, while later it becomes a column with many values…
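One way to get one value per cutoff, i.e. the k values one would expect from k-fold CV, is to skip performance_metrics (which aggregates over horizon) and group the cross_validation output by its cutoff column instead. The df_cv below is a tiny synthetic stand-in for the DataFrame that prophet.diagnostics.cross_validation returns; with real output the same groupby applies:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for Prophet's cross_validation output:
# one row per forecasted point, tagged with the cutoff it belongs to.
df_cv = pd.DataFrame({
    "cutoff": ["2021-01-01"] * 3 + ["2021-02-01"] * 3,
    "y":      [10.0, 12.0, 11.0, 9.0, 10.0, 14.0],
    "yhat":   [11.0, 11.0, 12.0, 9.0, 12.0, 12.0],
})

# One RMSE per cutoff (per fold), instead of one row per horizon value.
df_cv["sq_err"] = (df_cv["y"] - df_cv["yhat"]) ** 2
rmse_per_cutoff = np.sqrt(df_cv.groupby("cutoff")["sq_err"].mean())
print(rmse_per_cutoff)
```

The resulting Series has exactly as many entries as there are cutoffs, which is the per-fold view asked about above.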
Thanks
Issue Analytics
- State:
- Created 3 years ago
- Comments:9 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
so, indeed, CV in time series is trickier…
here is a nice post describing different possible approaches (I guess the main ones - I am not a big expert): https://medium.com/@soumyachess1496/cross-validation-in-time-series-566ae4981ce4
I guess the devs here (@bletham, …) could think of integrating a few other CV techniques in the future (like blocked CV, which is basically what @ivangvi proposed, if I understood correctly…)
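For reference, blocked CV as mentioned above can be sketched in a few lines: the series is cut into contiguous, non-overlapping blocks, and each block is split into a train part and a test part. The function name here (make_blocked_splits) is illustrative, not from Prophet or any library:

```python
import numpy as np

def make_blocked_splits(n_samples, n_blocks, train_frac=0.8):
    """Split indices 0..n_samples-1 into contiguous blocks; within each
    block, the first train_frac of points train and the rest test."""
    blocks = np.array_split(np.arange(n_samples), n_blocks)
    splits = []
    for block in blocks:
        k = int(len(block) * train_frac)
        splits.append((block[:k], block[k:]))  # (train_idx, test_idx)
    return splits

for train, test in make_blocked_splits(n_samples=100, n_blocks=4):
    print(len(train), len(test))  # 20 train / 5 test per block here
```

Unlike expanding-window CV (what Prophet's cross_validation does), the blocks are independent, so no observation is reused across folds.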
Great answers, guys! Much appreciated; as so often in this field, the answer is "it depends". Very interesting to read about your considerations and trade-offs.