
Service API MOO poor coverage of Pareto Front


Hello, I am running a multi-objective optimization using the Service API’s MOO model, and the resulting Pareto front seems to be highly localized, with poor coverage of the search space. A similar problem is mentioned in #800. I am looking for ways to improve that coverage.

I have a reference Pareto front from an analytical derivation for the same problem, and I am trying to replicate it with a simulation-based approach. I expect some discrepancies in the objective values, but overall the two Pareto fronts should follow similar trends. The objective values for multiple parameterizations obtained with the ‘analytical’ approach (with a clear Pareto front) are shown below:

[Figure: objective values from the analytical approach, showing a clear Pareto front]

The posterior Pareto front which I obtained through the simulation-based approach using Ax is plotted here:

[Figure: posterior Pareto front obtained from the simulation-based Ax run]

I used a reference threshold of 0.9 for both objectives, but the front is localized within a very small subregion (above 1.3). The plot of consecutive evaluations shows that the algorithm heavily exploits that region. Do you think that increasing the number of Sobol steps could help here? For now, I used 10 Sobol + 90 MOO trials.

[Figure: objective values of consecutive evaluations, concentrated in a small region]
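On the Sobol question: the size of the quasi-random initialization can be controlled by passing a custom generation strategy to `AxClient`. A minimal sketch, assuming the `GenerationStrategy` API current around the time of this issue (the step sizes here are illustrative, not a recommendation):

    from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
    from ax.modelbridge.registry import Models
    from ax.service.ax_client import AxClient

    # More Sobol trials spread the initial design over the search space
    # before the model-based MOO steps start exploiting promising regions.
    generation_strategy = GenerationStrategy(
        steps=[
            GenerationStep(model=Models.SOBOL, num_trials=30),  # was 10 in the run above
            GenerationStep(model=Models.MOO, num_trials=-1),    # -1 = run until stopped
        ]
    )
    ax_client = AxClient(generation_strategy=generation_strategy)

Whether a larger initial design actually helps depends on the problem; as the comments below note, in this case the localization turned out to stem from discrepancies between the analytical and simulation-based models rather than from under-exploration.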

Finally, it’s worth mentioning that the experiment shown above was performed before I upgraded Ax to the newest version. I am including the code for reference (the parameter definitions etc. are fetched from a separate YAML file).

    import concurrent.futures
    import math

    from ax.service.ax_client import AxClient, ObjectiveProperties

    # Creating an experiment (ax_client, params, opt_config, objective_config,
    # multiobjective and sim are set up earlier from the YAML configuration)
    if multiobjective:
        ax_client.create_experiment(
            name=opt_config['experiment_name'],
            parameters=params,
            objectives={obj['name']: ObjectiveProperties(minimize=obj['minimize'],
                                                         threshold=obj['threshold'])
                        for obj in objective_config['objective_metrics']},
            outcome_constraints=opt_config['outcome_constraints'])
    else:
        ax_client.create_experiment(
            name=opt_config['experiment_name'],
            parameters=params,
            objective_name=objective_config['objective_metric'],
            minimize=objective_config['minimize'],  # Optional, defaults to False.
            outcome_constraints=opt_config['outcome_constraints'])

    NUM_OF_ITERS = opt_config['num_of_iters']
    BATCH_SIZE = 1  # running sequentially

    # Initializing variables used in the iteration loop
    abandoned_trials_count = 0
    # Round up so a partial final batch is still executed
    NUM_OF_BATCHES = math.ceil(NUM_OF_ITERS / BATCH_SIZE)

    for i in range(NUM_OF_BATCHES):
        try:
            results = {}
            trials_to_evaluate = {}
            # Sequentially generate the batch
            for _ in range(min(NUM_OF_ITERS - i * BATCH_SIZE, BATCH_SIZE)):
                parameterization, trial_index = ax_client.get_next_trial()
                trials_to_evaluate[trial_index] = parameterization

            # Evaluate the batch in parallel; one executor per batch, not per trial
            with concurrent.futures.ProcessPoolExecutor(max_workers=3) as executor:
                futures = {
                    trial_index: executor.submit(sim.get_results, parameterization)
                    for trial_index, parameterization in trials_to_evaluate.items()
                }
                for trial_index, future in futures.items():
                    try:
                        results[trial_index] = future.result()
                    except Exception as e:
                        ax_client.abandon_trial(trial_index=trial_index)
                        abandoned_trials_count += 1
                        print(f'[WARNING] Abandoning trial {trial_index} due to processing errors.')
                        print(e)
                        if abandoned_trials_count > 0.1 * NUM_OF_ITERS:
                            print('[WARNING] More than 10% of iterations were abandoned. '
                                  'Consider improving the parametrization.')

            # Report successful evaluations back to Ax once the batch is done
            for trial_index, raw_data in results.items():
                ax_client.complete_trial(trial_index=trial_index, raw_data=raw_data)

        except KeyboardInterrupt:
            print('Program interrupted by user')
            break
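For reference, once the loop finishes, the points Ax considers Pareto-optimal can be pulled back out of the client directly; a minimal sketch, using the `ax_client` from the snippet above:

    # get_pareto_optimal_parameters() returns a dict mapping trial index to
    # (parameterization, (objective means, covariances)) for frontier points.
    pareto_optimal = ax_client.get_pareto_optimal_parameters()
    for trial_index, (parameterization, (means, _covariances)) in pareto_optimal.items():
        print(f'Trial {trial_index}: {means}')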

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 9 (7 by maintainers)

Top GitHub Comments

4 reactions
IgorKuszczak commented, Apr 5, 2022

Hi @sdaulton, I think this is the case. I wasn’t fully aware of the extent of the discrepancies between the two approaches when I posted, which led me to an incorrect conclusion about poor PF coverage. Thanks a lot, @sgbaird, for your advice! And thanks a lot, Ax team, for this incredible software. I have been using it for a year now on multiple projects, and it never ceases to amaze me.

2 reactions
sgbaird commented, Apr 5, 2022

@IgorKuszczak glad to hear things got cleared up, and agreed about the Ax platform! I’ve also been using it on multiple projects over the last 4 months or so. Really opens a lot of doors. Would love to see any manuscripts that come from this.


