Adaptive hyper-parameter optimization
Current options for hyper-parameter optimization (grid search, random search) construct a task graph covering every candidate, submit that graph, and then wait for the result. However, we might instead submit only enough candidates to saturate our available workers, collect some of those results, and then, based on those results, submit more work asynchronously. This might explore the parameter space more efficiently.
This would deviate from vanilla task graphs and would probably require the futures API (specifically the as_completed function), which restricts us to the dask.distributed scheduler.
Here is a simplified and naive implementation showing the use of dask.distributed.as_completed:
```python
from dask.distributed import Client, as_completed

client = Client()  # as_completed requires the dask.distributed scheduler

initial_guesses = [...]  # parameter settings to try first
futures = [client.submit(fit_and_score, params) for params in initial_guesses]

seq = as_completed(futures, with_results=True)
for future, result in seq:
    if good_enough():
        break
    # Use the results seen so far to pick the next parameters to evaluate
    params = compute_next_guess()
    new_future = client.submit(fit_and_score, params)
    seq.add(new_future)  # the loop will also yield this new future
```
I strongly suspect that scholarly work exists in this area on evaluating stopping criteria and computing optimal next guesses. What is the state of the art?
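For concreteness, here is one deliberately naive way the two helpers from the sketch above might look: `good_enough` as a plateau-based stopping rule and `compute_next_guess` as a random perturbation of the best parameters seen so far. The `record` helper, the `target`/`patience`/`scale` parameters, and the plateau logic are all assumptions for illustration, not an established algorithm; more principled versions would come from the Bayesian optimization literature (e.g. expected improvement).

```python
import random

best_score = float("-inf")
best_params = None
plateau = 0  # completed rounds since the best score last improved

def record(result):
    # Assumes fit_and_score returns a (score, params) pair; call this
    # at the top of the as_completed loop, before good_enough().
    global best_score, best_params, plateau
    score, params = result
    if score > best_score:
        best_score, best_params, plateau = score, params, 0
    else:
        plateau += 1

def good_enough(target=0.99, patience=20):
    # Stop once the score is high enough or progress has stalled.
    return best_score >= target or plateau >= patience

def compute_next_guess(scale=0.1):
    # Randomly perturb the best parameters seen so far
    # (assumes every parameter value is numeric).
    return {k: v * (1 + random.gauss(0, scale))
            for k, v in best_params.items()}
```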
Additionally, I suspect that this problem is further complicated by pipelines. The hierarchical / tree-like structure of pipeline/gridsearch means that it probably makes sense to alter the parameters of the later stages of the pipeline more often than those of the earlier stages, since changing an early stage invalidates all of the downstream work (a rough sketch of this follows below). My simple code example above probably doesn’t do the full problem justice.
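To make that stage-weighting idea concrete, here is a sketch of a pipeline-aware `compute_next_guess`. Everything in it is hypothetical: the stage names, the `2**i` weighting heuristic, and the assumption that all parameters are numeric. It only illustrates the bias toward perturbing later, cheaper-to-recompute stages.

```python
import random

def next_pipeline_guess(params_by_stage):
    """Perturb one parameter, favoring later pipeline stages.

    params_by_stage maps stage name -> {param: numeric value}, ordered
    from earliest stage to latest (dicts preserve insertion order in
    Python 3.7+). The 2**i weighting is a made-up heuristic: changing
    stage i forces stages i+1..n to be recomputed, so cheap changes to
    late stages are tried exponentially more often.
    """
    names = list(params_by_stage)
    weights = [2 ** i for i in range(len(names))]
    stage = random.choices(names, weights=weights)[0]

    new = {s: dict(p) for s, p in params_by_stage.items()}
    key = random.choice(list(new[stage]))
    new[stage][key] *= 1 + random.gauss(0, 0.1)
    return new

# Example with hypothetical stages for a text-classification pipeline:
guess = next_pipeline_guess({
    "vectorize": {"max_features": 10_000.0},
    "reduce":    {"n_components": 100.0},
    "model":     {"alpha": 1.0},
})
```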
Issue Analytics
- Created: 5 years ago
- Comments: 13 (8 by maintainers)
Top GitHub Comments
Saw that #221 just got merged. Woohoo! Can’t wait to use this!
Yup, that’s the plan with #221.