
For those who are struggling to find positions for many optimized parameters

See original GitHub issue

How do you find which model positions the randomly named optimized parameters correspond to?

During the tuning process, Hyperas prints some important outputs that help you locate them.

First, check the definition of the get_space function in the "Hyperas search space" section of the tuning output. You will find something like this:

(the generated get_space() definition appears here as a screenshot in the original issue)

This tells you the name Hyperas assigned to each parameter to be tuned.
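The naming is purely positional: each tunable expression is named after the Keras call it appears in, and repeats get numeric suffixes (Dropout, Dropout_1, Dropout_2, ...). A minimal stdlib sketch of that naming rule (a hypothetical helper, for illustration only, not Hyperas's actual code):

```python
from collections import Counter

def name_parameters(calls):
    """Mimic Hyperas-style naming: the first occurrence of a call keeps
    the bare name; later occurrences get _1, _2, ... suffixes."""
    seen = Counter()
    names = []
    for call in calls:
        count = seen[call]
        names.append(call if count == 0 else f"{call}_{count}")
        seen[call] += 1
    return names

print(name_parameters(["Dropout", "Dense", "Activation", "Dropout", "Dropout"]))
# ['Dropout', 'Dense', 'Activation', 'Dropout_1', 'Dropout_2']
```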

Second, check the "Resulting replaced keras model" section of the tuning output, which looks something like this:

(the rewritten Keras model appears here as a screenshot in the original issue)

You will see that some expressions have been replaced by space['Dropout'], space['Dropout_1'], and so on. These are the positions that correspond to the optimized parameters, so at this point it is easy to fill in the optimized values.
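For illustration, once you know the mapping, substituting the tuned values back into your model is just a dictionary lookup. The keys and values below are hypothetical, shaped like a typical Hyperas result:

```python
# Hypothetical best-parameter dict, as returned by a tuning run.
space = {"Dropout": 0.16, "Dropout_1": 0.117}

# The rewritten model contains space['Dropout'], space['Dropout_1'], ...
# at the positions of the original {{uniform(0, 1)}} expressions, so the
# final model would use:
first_dropout_rate = space["Dropout"]     # first Dropout layer in the model
second_dropout_rate = space["Dropout_1"]  # second Dropout layer in the model
print(first_dropout_rate, second_dropout_rate)
```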

Hope this helps.

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 5

Top GitHub Comments

3 reactions
sanesanyo commented, Sep 2, 2018

I have the following parameters in my model:

model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout({{uniform(0, 1)}}))
model.add(Dense({{choice([256, 512, 1024])}}))
model.add(Activation({{choice(['relu', 'sigmoid'])}}))
model.add(Dropout({{uniform(0, 1)}}))

model.add(Dense({{choice([128,256,512])}}))
model.add(Activation({{choice(['relu', 'sigmoid'])}}))
model.add(Dropout({{uniform(0, 1)}}))

model.add(Dense(10))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', metrics=['accuracy'],
              optimizer={{choice(['rmsprop', 'adam', 'sgd'])}})

model.fit(x_train, y_train,
          batch_size={{choice([64, 128])}},
          epochs=1,
          verbose=2,
          validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, verbose=0)

After Hyperas is done finding the optimal hyperparameters, it gives the following info:

{'Activation': 1, 'Activation_1': 1, 'Dense': 2, 'Dense_1': 0, 'Dropout': 0.1602501347478713, 'Dropout_1': 0.11729755246044238, 'Dropout_2': 1, 'Dropout_3': 0.41266207281071243, 'add': 1, 'batch_size': 1, 'optimizer': 1}

I follow the output up to 'Dropout_1', but after that I don't understand 'Dropout_2': 1 and 'add': 1. I added only three Dense layers with Dropout (the first Dense layer, which takes the input, is fixed), not counting the output layer, yet I am getting Dropout entries for four layers, with 'Dropout_2' equal to 1. I am probably missing some caveat, so I hope someone can take a look and help me out.

Thanks a lot in advance.

1 reaction
dumkar commented, Mar 9, 2019

If you want optim.minimize to report the values of the best parameters instead of their indices, pass eval_space=True as an extra argument.
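Without eval_space=True, every {{choice(...)}} parameter is reported as an index into its choice list, which is why the result dict above contains small integers. A minimal stdlib sketch of that decoding (the choice lists are copied from the model in the first comment; the helper name and the example best_run values are hypothetical):

```python
# Choice lists as written in the model above.
choices = {
    "Dense": [256, 512, 1024],
    "Dense_1": [128, 256, 512],
    "Activation": ["relu", "sigmoid"],
    "Activation_1": ["relu", "sigmoid"],
    "optimizer": ["rmsprop", "adam", "sgd"],
    "batch_size": [64, 128],
}

def decode(best_run, choices):
    """Replace each choice index with the value it selects; pass through
    real-valued parameters such as the Dropout rates unchanged."""
    return {k: (choices[k][v] if k in choices else v)
            for k, v in best_run.items()}

best_run = {"Dense": 2, "Activation": 1, "optimizer": 1,
            "batch_size": 1, "Dropout": 0.16}
print(decode(best_run, choices))
# {'Dense': 1024, 'Activation': 'sigmoid', 'optimizer': 'adam',
#  'batch_size': 128, 'Dropout': 0.16}
```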
