
[tune] Cannot create an implementation based on HyperOpt or Optuna with Ray Tune


What is your question?

I’m trying to create an implementation of Ray Tune with HyperOpt. I have gone through the provided tutorial, but it is not relevant to how I need to use HyperOpt. I’m optimizing a deep neural network architecture and use the HyperOpt fmin function for this (which is different from the example provided for Ray Tune, iirc). When I try to base my implementation on the example from the documentation, I keep getting an AttributeError, “metric unknown”, when I try to maximize the accuracy.

Example of how I run my search space in HyperOpt (fn = create_model builds the DNN whose architecture is to be optimized; test_space holds all the candidate hyperparameters):

best_run = fmin(fn=create_model, space=test_space, algo=tpe.suggest, max_evals=10, trials=trials)

Regarding the Optuna library, there is no example provided, although this post claims that there are 4 different search algorithms available for Optuna. Can you point me in the right direction for this?

Ray version and other system information:

  • Python: 3.6
  • TensorFlow: 1.14
  • Ray: 0.8.6
  • OS: macOS Mojave 10.14.6

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments:19 (11 by maintainers)

Top GitHub Comments

1 reaction
richardliaw commented, Aug 19, 2020

RE: hparams: Hmm, maybe try pip install -U tensorboardX?

RE: accuracy: please make the following change -

validation_acc = np.amax(result.history['val_acc'])
  
- tune.report(**{'loss': -validation_acc, 'status': STATUS_OK, 'model': model})
+ tune.report(**{'accuracy': validation_acc, 'status': STATUS_OK, 'model': model})

That should fix things for you!

0 reactions
richardliaw commented, Aug 30, 2022

Hey there, this issue is quite stale. Can we open a new issue to track? thanks!

On Sun, Aug 28, 2022 at 7:40 AM knilakshan20 @.***> wrote:

@richardliaw https://github.com/richardliaw I got a similar issue when reporting a dictionary as follows:

tune.report(**{'harrells_c': np.amax(results.history['harrells_c'])})

TensorFlow: 2.8.2, Ray: 2.0.0

model = builder(D, config)
model.compile(loss=losses['task1'], metrics=[metrics['task1']], optimizer=optimizer)
model.summary()
results = model.fit(x=dataset.batch(256), epochs=epochs, verbose=verbose)



Top Results From Across the Web

  • Search Algorithms (tune.search) — Ray 2.2.0: CFO (Cost-Frugal hyperparameter Optimization) is a hyperparameter search algorithm based on randomized local search. It is backed by the FLAML library. It ...
  • Ray Tune FAQ — Ray 2.2.0: In this case, we cannot use tune.sample_from because it doesn't support grid searching. The solution here is to create a list of valid...
  • Key Concepts — Ray 2.2.0: First, you define the hyperparameters you want to tune in a search space and pass them into a trainable that specifies the objective...
  • Trial Schedulers (tune.schedulers) — Ray 2.2.0: Tune includes a distributed implementation of Population Based Training (PBT). This can be enabled by setting the scheduler parameter of tune.
  • Running Tune experiments with Optuna — Ray 2.2.0: Similar to Ray Tune, Optuna is an automatic hyperparameter optimization ... [I 2022-07-22 15:21:47,769] A new study created in memory with name: optuna...
