Service API MOO: poor coverage of the Pareto front
Hello, I am running a multi-objective optimization using the Service API's MOO model, and the resulting Pareto front seems to be highly localized, with poor coverage of the search space. A similar problem is mentioned in #800. I am looking for ways to improve that coverage.
I have a reference Pareto front from an analytical derivation for the same problem, and I am trying to replicate it with a simulation-based approach. I expect some discrepancies in the objective values, but overall the two Pareto fronts should follow similar trends. The objective values for multiple parameterizations obtained with the analytical approach (with a clear Pareto front) are shown below:
The posterior Pareto front that I obtained through the simulation-based approach in Ax is plotted here:
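For reference, a posterior front like this can be generated with Ax's Pareto plotting utilities; below is a minimal sketch of that step (the metric names `objective_a` and `objective_b` are placeholders, not the actual names from my config):

```python
from ax.plot.pareto_frontier import plot_pareto_frontier
from ax.plot.pareto_utils import compute_posterior_pareto_frontier
from ax.utils.notebook.plotting import render

# Placeholder metric names -- in practice these come from the YAML config.
frontier = compute_posterior_pareto_frontier(
    experiment=ax_client.experiment,
    data=ax_client.experiment.fetch_data(),
    primary_objective=ax_client.experiment.metrics['objective_a'],
    secondary_objective=ax_client.experiment.metrics['objective_b'],
    absolute_metrics=['objective_a', 'objective_b'],
    num_points=20,
)
render(plot_pareto_frontier(frontier, CI_level=0.90))
```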
I used a reference threshold of 0.9 for both objectives, but the front is localized within a very small subregion (above 1.3). The plot of consecutive evaluations shows that the algorithm heavily exploits that region. Do you think that increasing the number of Sobol steps could help here? For now, I used 10 Sobol + 90 MOO trials.
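For concreteness, this is the kind of change I mean by increasing the Sobol steps: passing a custom GenerationStrategy to AxClient. The sketch below is only an illustration; the count of 30 Sobol trials is arbitrary, and `Models.MOO` matches the registry entry in the Ax version I was using, so treat the exact names as assumptions:

```python
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.service.ax_client import AxClient

# Sketch: 30 quasi-random Sobol trials before handing over to the MOO model.
generation_strategy = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_trials=30),
        GenerationStep(model=Models.MOO, num_trials=-1),  # -1: no limit on MOO trials
    ]
)
ax_client = AxClient(generation_strategy=generation_strategy)
```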
Finally, it’s worth mentioning that the experiment shown above was performed before I upgraded Ax to the newest version. I am including the code for reference (the parameter definitions and other settings are fetched from a separate YAML file).
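For context, the config objects referenced in the code are loaded roughly like this; the file name and keys here are illustrative, not my actual setup:

```python
import yaml

# Illustrative only: the real file name and keys come from the project config.
with open('optimization_config.yaml') as f:
    config = yaml.safe_load(f)

opt_config = config['optimization']      # experiment name, num_of_iters, outcome_constraints, ...
objective_config = config['objectives']  # objective metric definitions (name, minimize, threshold)
params = config['parameters']            # Ax-style parameter definitions
```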
```python
import concurrent.futures

from ax.service.ax_client import ObjectiveProperties

# ax_client, params, opt_config, objective_config and sim are defined elsewhere
# (the configs are loaded from the YAML file mentioned above).

# Creating an experiment
if multiobjective:
    ax_client.create_experiment(
        name=opt_config['experiment_name'],
        parameters=params,
        objectives={
            i['name']: ObjectiveProperties(minimize=i['minimize'], threshold=i['threshold'])
            for i in objective_config['objective_metrics']
        },
        outcome_constraints=opt_config['outcome_constraints'])
else:
    ax_client.create_experiment(
        name=opt_config['experiment_name'],
        parameters=params,
        objective_name=objective_config['objective_metric'],
        minimize=objective_config['minimize'],  # Optional, defaults to False.
        outcome_constraints=opt_config['outcome_constraints'])

NUM_OF_ITERS = opt_config['num_of_iters']
BATCH_SIZE = 1  # running sequentially

# Initializing variables used in the iteration loop
abandoned_trials_count = 0
NUM_OF_BATCHES = NUM_OF_ITERS // BATCH_SIZE if NUM_OF_ITERS % BATCH_SIZE == 0 else NUM_OF_ITERS // BATCH_SIZE + 1

for i in range(NUM_OF_BATCHES):
    try:
        results = {}
        trials_to_evaluate = {}
        # Sequentially generate the batch
        for j in range(min(NUM_OF_ITERS - i * BATCH_SIZE, BATCH_SIZE)):
            parameterization, trial_index = ax_client.get_next_trial()
            trials_to_evaluate[trial_index] = parameterization
        # Evaluate the trials in parallel and collect the results in a dictionary
        with concurrent.futures.ProcessPoolExecutor(max_workers=3) as executor:
            for trial_index, parametrization in trials_to_evaluate.items():
                try:
                    future = executor.submit(sim.get_results, parametrization)
                    results.update({trial_index: future.result()})
                except Exception as e:
                    ax_client.abandon_trial(trial_index=trial_index)
                    abandoned_trials_count += 1
                    print(f'[WARNING] Abandoning trial {trial_index} due to processing errors.')
                    print(e)
                    if abandoned_trials_count > 0.1 * NUM_OF_ITERS:
                        print('[WARNING] More than 10 % of iterations were abandoned. Consider improving the parametrization.')
        # Report the completed trials back to Ax
        for trial_index in results:
            ax_client.complete_trial(trial_index, results.get(trial_index))
    except KeyboardInterrupt:
        print('Program interrupted by user')
        break
```
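Once the loop finishes, the Pareto-optimal parameterizations found so far can be pulled from the client to see how spread out they actually are; a minimal sketch, assuming `AxClient.get_pareto_optimal_parameters` is available in the installed Ax version:

```python
# Inspect the Pareto-optimal trials found so far (MOO experiments only).
pareto_optimal = ax_client.get_pareto_optimal_parameters()
for trial_index, (parameterization, (means, covariances)) in pareto_optimal.items():
    print(trial_index, parameterization, means)
```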
Top GitHub Comments
Hi @sdaulton, I think this is the case. I wasn’t fully aware of the extent of the discrepancies between the two approaches when I posted, which led me to an incorrect conclusion about poor Pareto front coverage. Thanks a lot, @sgbaird, for your advice! And thanks a lot, Ax team, for this incredible software. I have been using it for a year now on multiple projects, and it never ceases to amaze me.
@IgorKuszczak glad to hear things got cleared up, and agreed about the Ax platform! I’ve also been using it on multiple projects over the last 4 months or so. Really opens a lot of doors. Would love to see any manuscripts that come from this.