Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging 3rd party libraries. It collects links to all the places you might be looking at while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Expanding Window Splitter

See original GitHub issue

Is your feature request related to a problem? Please describe.

The sliding window splitter already exists, but I would also like to request the expanding window splitter as described here.

Describe the solution you’d like

The image in the link above describes how the splits can be achieved.

Describe alternatives you’ve considered

Currently I am only using CutoffSplitter manually, but what if I want to use this in ForecastingGridSearchCV? As far as I could tell, using an expanding window with ForecastingGridSearchCV is currently not possible.
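The requested behaviour can be sketched in plain Python. This is an illustrative sketch of how expanding-window train/test splits are generated, not sktime’s actual API; all names here are hypothetical:

```python
import numpy as np

def expanding_window_splits(n_samples, initial_window, test_size, step=1):
    """Yield (train_idx, test_idx) pairs where the training set grows
    by `step` observations each split, while the test window stays fixed."""
    end = initial_window
    while end + test_size <= n_samples:
        yield np.arange(0, end), np.arange(end, end + test_size)
        end += step

# 10 observations, start with 4 training points, 2-step test window, step of 2
for train, test in expanding_window_splits(10, 4, 2, step=2):
    print(train.tolist(), test.tolist())
# → [0, 1, 2, 3] [4, 5]
#   [0, 1, 2, 3, 4, 5] [6, 7]
#   [0, 1, 2, 3, 4, 5, 6, 7] [8, 9]
```

Each successive training set starts at the first observation and extends further forward, which is exactly what distinguishes it from a sliding window of fixed length.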

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

2 reactions
aiwalter commented, Dec 21, 2020

I am just following this discussion and have some thoughts. I think the misunderstanding is that there is a hyperparameter tuning part and a model training part. The sliding window is part of the tuning; training is usually done afterwards with all training data and the best parameters found.

I think the model can only learn from data it has seen. When using a sliding window, old data is not shown to all models (at least during tuning; you might show all data after tuning to really train the model for inference). So if there is an important pattern in the old data that might appear again in the future, sliding window tuning can be a problem, because you might select hyperparameters from a model that has not seen this old data.

This can be solved by using an expanding window. The problem with an expanding window, however, is that empirically, the models with more training data will most likely perform better (“the bigger the better”). So with an expanding window there is always a bias towards selecting hyperparameters from a model that has seen more data. Both window splitting methods therefore have pros and cons.

Anyway, I think it would be good to have an ExpandingWindowSplitter, so people simply have more choice based on what they prefer.
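The trade-off described above can be seen by comparing the training windows the two schemes produce. This is a minimal plain-Python sketch (illustrative names, independent of any library): an expanding window keeps the training start fixed so training sets grow, while a sliding window moves the start forward so training sets stay the same size:

```python
def training_ranges(n, window, test_size, step, expanding):
    """Return (start, end) training index ranges for each split.
    Expanding: start pinned at 0. Sliding: start follows the window."""
    splits = []
    end = window
    while end + test_size <= n:
        start = 0 if expanding else end - window
        splits.append((start, end))
        end += step
    return splits

n = 12
print(training_ranges(n, 4, 2, 2, expanding=False))
# → [(0, 4), (2, 6), (4, 8), (6, 10)]   constant training size of 4
print(training_ranges(n, 4, 2, 2, expanding=True))
# → [(0, 4), (0, 6), (0, 8), (0, 10)]   training size grows: 4, 6, 8, 10
```

The growing sizes in the expanding case make the bias concrete: later splits evaluate models fitted on strictly more data, so their scores are not directly comparable to the early splits.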

0 reactions
mloning commented, Dec 21, 2020

I’ll close this one in favour of #552 to keep discussion in one place

Read more comments on GitHub >

Top Results From Across the Web

ExpandingWindowSplitter — sktime documentation
Expanding window splitter. Split time series repeatedly into a growing training set and a fixed-size test set. Test window is defined by forecasting...
Expanding vs sliding window splitter ... - GitHub
I think that is what is usually called "expanding window" ? Mixing this up with the SlidingWindowSplitter class might be a bit confusing...
Expanding window 5-split time-series cross-validation.
This study employed an expanding window (also known as forward-chaining) cross-validation with datasets partitioned into 6 nearly evenly distributed ...
Why start using sktime for forecasting? - Towards Data Science
In Expanding Window we extend the training set by a fixed number of data points in each run. This way we create multiple...
