
Adaptive hyper-parameter optimization

See original GitHub issue

Current options for hyper-parameter optimization (grid search, random search) construct a task graph to search many options, submit that graph, and then wait for the result. However, we might instead submit only enough options to saturate our available workers, get some of those results back and then based on those results submit more work asynchronously. This might explore the parameter space more efficiently.

This would deviate from vanilla task graphs and would probably require the futures API (and the as_completed function specifically), which imposes a hard requirement on the dask.distributed scheduler.

Here is a simplified and naive implementation showing the use of dask.distributed.as_completed:

from dask.distributed import Client, as_completed

client = Client()  # connect to (or start) a dask.distributed cluster

initial_guesses = [...]  # enough parameter settings to saturate the workers
futures = [client.submit(fit_and_score, params) for params in initial_guesses]

# as_completed yields futures in completion order and accepts new work,
# so each result can steer which parameters get submitted next
seq = as_completed(futures, with_results=True)
for future, result in seq:
    if good_enough(result):
        break
    params = compute_next_guess()
    new_future = client.submit(fit_and_score, params)
    seq.add(new_future)  # feed the new future back into the iterator
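
The scoring, stopping, and next-guess functions above are deliberately left as stubs. As a rough sketch of how they might be filled in, here is a plain random-search variant; the search space, the stopping threshold, and the use of scikit-learn's SVC are illustrative assumptions, not part of the original issue:

import random

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# hypothetical search space and stopping threshold for the sketch
param_space = {"C": [0.1, 1, 10, 100], "gamma": [1e-4, 1e-3, 1e-2]}
target_score = 0.95
best_score = -float("inf")

def fit_and_score(params):
    # train a model with the given hyper-parameters, return a validation score
    return SVC(**params).fit(X_train, y_train).score(X_test, y_test)

def compute_next_guess():
    # naive strategy: sample each hyper-parameter uniformly at random
    return {name: random.choice(values) for name, values in param_space.items()}

def good_enough(result):
    # stop once any configuration beats the target score
    global best_score
    best_score = max(best_score, result)
    return best_score >= target_score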

I strongly suspect that scholarly work exists in this area to evaluate stopping criteria and compute optimal next guesses. What is the state of the art?
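
For context (an aside, not from the issue): one well-studied family of answers is successive halving (Jamieson and Talwalkar, 2016) and its extension Hyperband (Li et al., 2017), which spend a small budget on many configurations and repeatedly promote the best fraction to a larger budget. A minimal sequential sketch of successive halving, with a stubbed-out budgeted scoring function; the search space and defaults are assumptions:

import random

param_space = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]}

def fit_and_score(params, budget):
    # placeholder: a real version would train for `budget` epochs/iterations
    # and return a validation score
    return random.random()

def successive_halving(n_configs=27, min_budget=1, eta=3):
    # start with a large pool of randomly sampled configurations
    configs = [
        {name: random.choice(values) for name, values in param_space.items()}
        for _ in range(n_configs)
    ]
    budget = min_budget
    while len(configs) > 1:
        # score every surviving configuration at the current budget
        scores = [fit_and_score(params, budget) for params in configs]
        # keep the best 1/eta fraction, give the survivors eta times the budget
        ranked = sorted(zip(scores, configs), key=lambda pair: pair[0], reverse=True)
        configs = [params for _, params in ranked[: max(1, len(configs) // eta)]]
        budget *= eta
    return configs[0]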

Additionally, I suspect that this problem is further complicated by pipelines. The hierarchical / tree-like structure of pipeline/gridsearch means that it probably makes sense to alter the parameters of the later stages of the pipeline more often than the earlier stages. My simple code example above probably doesn’t do the full problem justice.
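
As a partial mitigation on the pipeline point: scikit-learn's Pipeline accepts a memory argument that caches fitted transformers, so a search that varies only later-stage parameters reuses the expensive earlier stages. A small sketch, with illustrative step names and grid:

from tempfile import mkdtemp

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# memory= caches fitted transformers on disk, so re-fitting the pipeline
# with identical early-stage parameters skips the redundant recomputation
pipe = Pipeline(
    [("reduce_dim", PCA(n_components=20)), ("clf", SVC())],
    memory=mkdtemp(),
)

# this grid varies only the final stage, so PCA is fit once and reused
search = GridSearchCV(pipe, {"clf__C": [0.1, 1, 10, 100]})
search.fit(X, y)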

cc @ogrisel @stsievert @GaelVaroquaux

Issue Analytics

  • State: open
  • Created: 5 years ago
  • Comments: 13 (8 by maintainers)

Top GitHub Comments

2 reactions
jimmywan commented, Jun 15, 2019

Saw that #221 just got merged. Woohoo! Can’t wait to use this!

1 reaction
stsievert commented, Mar 8, 2019

“I believe Scott is planning to finish it off in a couple weeks.”

Yup, that’s the plan with #221.


Top Results From Across the Web

Meta-Learning with Adaptive Hyperparameters
This line of work attempts to directly modify the conventional optimization algorithms to enable fast adaptation with few examples. One of the most...

Hyperparameter optimization
In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm.

AdaptiveHyperparameterTuning.pdf
To help machine learning engineers tune their deep learning models, Determined AI offers adaptive hyperparameter tuning, a practical and broadly applicable ...

Adaptive Optimizer for Automated Hyperparameter ...
Abstract: The choices of hyperparameters have critical effects on the performance of machine learning models. In this paper, we present a ...

Adaptive CV: An approach for faster cross validation and ...
Identification of optimal hyperparameters is an integral component for building robust, accurate machine learning models.
