Using Ax as a supplier of candidates for black box evaluation
Hi,
I have been trying, in recent days, to use Ax for my task.
The use case: supplying X new candidates for evaluation, given known and pending evaluations. Our “evaluation” is the training and testing of an ML model, done on a cloud server. I just want to feed the results to the BO model and get new points for evaluation, i.e. have Ax power our HPO. No success yet.
In BoTorch, I achieved this goal with these lines at the core:
import botorch
import gpytorch

# Fit a single-task GP to the observed data (X, Y).
model = botorch.models.SingleTaskGP(X, Y)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(model.likelihood, model)
botorch.fit.fit_gpytorch_model(mll)
# Optimize qNEI jointly over a batch of q candidates within the bounds.
acquisition_function = botorch.acquisition.qNoisyExpectedImprovement(model, X_baseline)
X_candidates_tensor = botorch.optim.joint_optimize(
    acquisition_function, bounds=bounds, q=batch_size, num_restarts=1, raw_samples=len(X)
)
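For what it’s worth, pending points can also be handled at this BoTorch level: qNoisyExpectedImprovement accepts an X_pending argument, so configurations that are in flight are integrated over rather than re-suggested. A minimal sketch, continuing the snippet above and assuming X_pending is a tensor of submitted-but-unevaluated points:

# Sketch: pass in-flight configurations so the acquisition function
# accounts for them when proposing the next batch.
acquisition_function = botorch.acquisition.qNoisyExpectedImprovement(
    model, X_baseline=X, X_pending=X_pending
)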
I’ve been trying to use BotorchModel via the Developer API. Questions:
- Do I have to provide an evaluation function when defining an “experiment”? In our use case the function is a black box: we have a platform that launches training jobs as resources are freed and collects evaluations when they are ready, and I want to get X new candidates for evaluation from Ax, as in the BoTorch example above.
- I couldn’t find how to load the known and pending evaluations into the model.
- Are the objective_weights that the gen() function of BotorchModel requires weights for low/high-fidelity evaluations?
Have I been looking in the wrong place? Should I have been using the Service API (losing some flexibility)? Could you please direct me to relevant examples in both APIs?
(One of my main reasons for shifting to Ax is that I want, in the future, to optimize over a mixed domain: some parameters continuous and some discrete; but that is a different question…)
Thanks a lot, Avi
Top GitHub Comments
@avimit, the fix for the Service API bug should now be on master, and the trials it generates for you should look more reasonable. Also, regarding the fact that it will not generate more trials after the first 5: if you need more trials in parallel at the beginning, check out this section of the Service API tutorial; at the end there is an explanation of the flag you can use.
As for the SEM being set to 0, I will update you when that behavior is fixed. Thank you again for pointing it out!
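The flag referenced above is not named in the comment; as an assumption based on the Service API of that era, it is likely enforce_sequential_optimization, roughly:

from ax.service.ax_client import AxClient

# Assumption: `enforce_sequential_optimization` is the flag the tutorial
# refers to; setting it to False lets AxClient hand out more of the initial
# (quasi-random) trials in parallel instead of waiting for results.
ax_client = AxClient(enforce_sequential_optimization=False)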
@avimit, the Service API definitely supports asynchronous evaluation with proper handling of pending points. Please let us know if this works for you, and if you have any suggestions for how we could make this functionality clearer in the docs (I can see how calling the Developer API the “Developer API” is a little confusing, since all developers might think it’s the API for them 😉).
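A minimal sketch of that asynchronous Service API loop (the experiment name, parameter names, and metric name here are illustrative, not from the issue):

from ax.service.ax_client import AxClient

ax_client = AxClient()
ax_client.create_experiment(
    name="hpo_experiment",  # illustrative name
    parameters=[
        {"name": "lr", "type": "range", "bounds": [1e-5, 1e-1], "log_scale": True},
        {"name": "dropout", "type": "range", "bounds": [0.0, 0.5]},
    ],
    objective_name="accuracy",
    minimize=False,
)

# Attach evaluations that are already known from past runs.
params, trial_index = ax_client.attach_trial(parameters={"lr": 1e-3, "dropout": 0.1})
ax_client.complete_trial(trial_index=trial_index, raw_data={"accuracy": (0.90, 0.01)})

# Ask for a new candidate; trials that have been generated but not yet
# completed are automatically treated as pending by the model.
params, trial_index = ax_client.get_next_trial()
# ... launch the training job on the cloud platform, and once it finishes:
ax_client.complete_trial(trial_index=trial_index, raw_data={"accuracy": (0.92, 0.01)})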
Re: your query about objective_weights, this has nothing to do with fidelities. It instead specifies how you should weight multiple outcomes, if using a multi-output GP to model your results. FWIW, we are actively working on having more first-class support for multi-fidelity BayesOpt in Ax/BoTorch, and it should be available in the coming months.
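To make those semantics concrete, a hedged sketch of what objective_weights looks like at the Developer API level (the outcome pairing is illustrative):

import torch

# One modeled outcome, maximized: a single positive weight.
objective_weights = torch.tensor([1.0])

# Two modeled outcomes (say, accuracy and latency) where only the first
# is the objective; the second gets weight 0.
objective_weights = torch.tensor([1.0, 0.0])

# A negative weight indicates that the outcome should be minimized.
objective_weights = torch.tensor([-1.0])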