Bayesian optimization experiment suggests the same hyperparameters
/kind bug
What steps did you take and what happened:
I have seen the same hyperparameters suggested by a Bayesian optimization experiment several times.
- For example: with 1 parallel trial and a max trial count of 4, the exact same hyperparameters are suggested for every trial. Part of the experiment YAML file (a reproduction sketch follows the YAML):
```yaml
apiVersion: "kubeflow.org/v1alpha3"
kind: Experiment
metadata:
  namespace: kubeflow
  name: poi-experiment35
  labels:
    controller-tools.k8s.io: "1.0"
spec:
  objective:
    type: maximize
    goal: 0.99
    objectiveMetricName: fbeta_m
  metricsCollectorSpec:
    source:
      fileSystemPath:
        path: "/ml/models/abc/eval/"
        kind: Directory
    collector:
      kind: TensorFlowEvent
  algorithm:
    algorithmName: bayesianoptimization
    algorithmSettings:
      - name: "random_state"
        value: "5"
  parallelTrialCount: 1
  maxTrialCount: 4
  maxFailedTrialCount: 3
  parameters:
    - name: --tf_dense_size
      parameterType: int
      feasibleSpace:
        min: "128"
        max: "512"
    - name: --tf_dense_dropout_rate
      parameterType: double
      feasibleSpace:
        min: "0.01"
        max: "0.5"
    - name: --tf_learning_rate
      parameterType: double
      feasibleSpace:
        min: "0.00001"
        max: "0.001"
    - name: --tf_optimizer
      parameterType: categorical
      feasibleSpace:
        list:
          - sgd
          - rmsprop
          - adam
```
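The bayesianoptimization suggestion service appears to be backed by scikit-optimize (the maintainer comment at the bottom refers to its Optimizer class). Assuming that, here is a minimal sketch, not Katib code, of the suspected failure mode: a fresh Optimizer built with the same random_state for every suggestion request, with no completed trials told back, returns the identical point every time.

```python
# Hedged sketch, not Katib code: mimics the suspected failure mode with
# plain scikit-optimize, using a search space mirroring the YAML above.
from skopt import Optimizer
from skopt.space import Categorical, Integer, Real

space = [
    Integer(128, 512, name="--tf_dense_size"),
    Real(0.01, 0.5, name="--tf_dense_dropout_rate"),
    Real(1e-5, 1e-3, name="--tf_learning_rate"),
    Categorical(["sgd", "rmsprop", "adam"], name="--tf_optimizer"),
]

for trial in range(4):
    # If a new Optimizer is built for every suggestion request with the
    # same seed, and no completed trials are told back, ask() returns the
    # identical initial point for all four trials.
    optimizer = Optimizer(space, random_state=5)
    print(f"trial {trial}: {optimizer.ask()}")
```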
- Another example: with 3 parallel trials and a max trial count of 9, different hyperparameters are suggested for the first three trials, but every one of the six remaining trials is given the hyperparameter combination from the best-performing trial. Part of the experiment YAML file (again with a sketch after it):
```yaml
apiVersion: "kubeflow.org/v1alpha3"
kind: Experiment
metadata:
  namespace: kubeflow
  name: poi-experiment36
  labels:
    controller-tools.k8s.io: "1.0"
spec:
  objective:
    type: maximize
    goal: 0.99
    objectiveMetricName: fbeta_m
  metricsCollectorSpec:
    source:
      fileSystemPath:
        path: "/ml/models/abc/eval/"
        kind: Directory
    collector:
      kind: TensorFlowEvent
  algorithm:
    algorithmName: bayesianoptimization
    algorithmSettings:
      - name: "random_state"
        value: "15"
  parallelTrialCount: 3
  maxTrialCount: 9
  maxFailedTrialCount: 3
  parameters:
    - name: --tf_dense_size
      parameterType: int
      feasibleSpace:
        min: "128"
        max: "512"
    - name: --tf_dense_dropout_rate
      parameterType: double
      feasibleSpace:
        min: "0.01"
        max: "0.5"
    - name: --tf_learning_rate
      parameterType: double
      feasibleSpace:
        min: "0.00001"
        max: "0.001"
    - name: --tf_optimizer
      parameterType: categorical
      feasibleSpace:
        list:
          - sgd
          - rmsprop
          - adam
```
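This second symptom also matches plain scikit-optimize behaviour: once the initial random points are exhausted, repeated ask() calls with no intervening tell() deterministically return the same point. A minimal sketch with a toy one-dimensional objective (again an illustration, not Katib code):

```python
# Hedged sketch, plain scikit-optimize with a toy objective, not Katib code.
from skopt import Optimizer
from skopt.space import Real

def objective(x):
    # Toy function standing in for the real training job.
    return (x[0] - 0.3) ** 2

opt = Optimizer([Real(0.01, 0.5)], n_initial_points=3, random_state=15)

# First batch of three: distinct random initial points, like the first
# three trials in the experiment above.
batch = opt.ask(n_points=3)
for x in batch:
    opt.tell(x, objective(x))

# Remaining six "trials": with no new observations told back in between,
# every ask() proposes the same model-optimal point.
for trial in range(6):
    print(f"trial {trial + 3}: {opt.ask()}")
```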
What did you expect to happen: Different hyperparameter combinations to be suggested for each trial.
Anything else you would like to add: I was just wondering if other people have experienced something similar.
Environment:
- Minikube version: N/A (deployed on GCP)
- Kubeflow version: kfctl v1.0-rc.1-0-g963c787
- Kubernetes version (use `kubectl version`):

```
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-15T12:11:03Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-gke.33", GitCommit:"2c6d0ee462cee7609113bf9e175c107599d5213f", GitTreeState:"clean", BuildDate:"2020-01-15T17:47:46Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
```

- OS: linux
Top GitHub Comments
Yes, we can create the Optimizer instance only for the first GetSuggestion call.

Search space modification should not be allowed in the same experiment; it should be prevented now. See https://github.com/kubeflow/katib/issues/768
In that case, can we do this when we instantiate the algorithm?
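For illustration, here is a hedged skeleton of the pattern described in the comments above; the class, method, and Trial record are hypothetical, not the actual Katib service code. The skopt Optimizer is created lazily on the first GetSuggestion call and then reused, with every completed trial told back before new points are asked for.

```python
# Hypothetical skeleton, not the actual Katib service: illustrates creating
# the skopt Optimizer only on the first GetSuggestion call and reusing it.
from collections import namedtuple

from skopt import Optimizer

# Hypothetical trial record: params is a list of values in search-space order.
Trial = namedtuple("Trial", ["name", "params", "objective"])

class BayesianOptimizationService:
    def __init__(self, search_space, random_state):
        self.search_space = search_space
        self.random_state = random_state
        self.optimizer = None   # created lazily on the first call
        self.reported = set()   # names of trials already told back

    def get_suggestion(self, completed_trials, n_requested):
        if self.optimizer is None:
            # First GetSuggestion call: instantiate exactly once, so the
            # optimizer's internal state advances across calls instead of
            # being reset to the same seed every time.
            self.optimizer = Optimizer(self.search_space,
                                       random_state=self.random_state)
        for trial in completed_trials:
            if trial.name not in self.reported:
                # skopt minimizes, so negate a maximized objective (fbeta_m).
                self.optimizer.tell(trial.params, -trial.objective)
                self.reported.add(trial.name)
        return self.optimizer.ask(n_points=n_requested)
```

With this pattern the later trials in the second example would each be proposed from a model updated with new observations, rather than repeating the best point seen so far.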