
multi-objective optimization (e.g. qNEHVI) vs. scalarized objectives. Which to choose?

See original GitHub issue

I put together a tutorial illustrating the use of Ax’s multi-objective optimization functionality and comparing this against scalarization approaches. When using a scalarized quantity to compare performance, it makes sense that the scalarized objectives do better than MOO. However, when looking at Pareto fronts and comparing them against a naive scalarization approach (sum the two objectives), I was surprised to see that, in general, the naive scalarization Pareto fronts seem better. This was on a straightforward, 3-parameter task with a single local maximum AFAIK. The task is meant as a teaching demo (see e.g. notebook tutorials). In particular, the notebook is 6.1-multi-objective.ipynb, linked above.
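The "naive scalarization" described above can be sketched as follows. This is an illustrative stand-in, not the notebook's actual code: the metric names, objective directions (frechet minimized, luminous_intensity maximized), and equal weights are all assumptions. The key point is that the two metrics are combined inside the evaluation function, so Ax only ever sees a single scalar objective.

```python
# Hypothetical sketch of summing two objectives inside the evaluate
# function, so the optimizer models a single scalar metric.

def evaluate_scalarized(params, measure):
    # `measure` is a stand-in that returns the two raw metrics for a
    # candidate, e.g. frechet (assumed minimized) and
    # luminous_intensity (assumed maximized).
    frechet, luminous_intensity = measure(params)
    # Flip the sign of the minimized metric so "bigger is better" for
    # both terms, then sum with fixed equal weights.
    return {"scalarized": -frechet + luminous_intensity}

# Toy measurement for illustration only.
result = evaluate_scalarized({"x": 0.1}, lambda p: (0.2, 1.5))
print(result)  # {'scalarized': 1.3}
```

Because the weights are fixed up front, this recovers only one point per weight choice on a convex Pareto front, which is part of why comparing it against MOO on the full front can be misleading.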

I noticed that I regularly got the following warning during MOO:

c:\Users\<USERNAME>\Miniconda3\envs\sdl-demo\lib\site-packages\ax\modelbridge\transforms\winsorize.py:240: UserWarning:

Automatic winsorization isn't supported for an objective in `MultiObjective` without objective thresholds. Specify the winsorization settings manually if you want to winsorize metric frechet.

c:\Users\sterg\Miniconda3\envs\sdl-demo\lib\site-packages\ax\modelbridge\transforms\winsorize.py:240: UserWarning:

Automatic winsorization isn't supported for an objective in `MultiObjective` without objective thresholds. Specify the winsorization settings manually if you want to winsorize metric luminous_intensity.

Out of the sklearn preprocessing scalers, winsorization seems most comparable to sklearn’s RobustScaler (interestingly, it was the third hit when searching for “winsorization sklearn”). There’s also a winsorize function in SciPy (scipy.stats.mstats.winsorize). This is my attempt to frame it in terms of things I’m somewhat familiar with.
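The analogy can be made concrete with a small sketch. The two transforms behave differently on outliers: winsorization clips the tails to an interior value while leaving the scale alone, whereas RobustScaler keeps every value but re-centers by the median and divides by the IQR. The data here are toy values chosen to make the contrast visible.

```python
import numpy as np
from scipy.stats.mstats import winsorize
from sklearn.preprocessing import RobustScaler

# Toy data with one extreme outlier.
x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])

# Winsorization clips the top 20% of values to the largest remaining
# value, so the outlier 100.0 becomes 4.0; the original scale is kept.
x_win = np.asarray(winsorize(x, limits=[0.0, 0.2]))
print(x_win)  # [1. 2. 3. 4. 4.]

# RobustScaler subtracts the median (3.0) and divides by the IQR (2.0);
# the outlier survives, but with reduced leverage on the scaling.
x_rob = RobustScaler().fit_transform(x.reshape(-1, 1)).ravel()
print(x_rob)  # [-1.  -0.5  0.   0.5 48.5]
```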

  • Maybe I chose a poorly suited task to use for making this comparison.
  • Does anything seem amiss in the implementation?
  • Is part of the issue perhaps that I’m not specifying thresholds?
  • Open to any thoughts/feedback
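On the thresholds question: in hypervolume-based acquisition such as qNEHVI, the objective thresholds effectively act as the reference point that bounds the dominated region, so changing them changes what the method optimizes. A minimal 2-D sketch (pure NumPy, not Ax's or BoTorch's implementation, and assuming both objectives are maximized) shows how the same front yields different hypervolumes under different reference points:

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Area dominated by a 2-D Pareto front (both objectives maximized)
    relative to a reference point `ref`. Assumes `front` contains only
    mutually non-dominated points."""
    # Sort by the first objective, descending; the second objective is
    # then ascending for a valid Pareto front.
    pts = np.asarray(sorted(front, reverse=True), dtype=float)
    xs = np.append(pts[:, 0], ref[0])
    hv = 0.0
    for i, (x, y) in enumerate(pts):
        # Each point contributes a rectangle of width down to the next
        # point's x (or ref) and height down to the reference y.
        hv += (x - xs[i + 1]) * (y - ref[1])
    return hv

front = [(3.0, 1.0), (1.0, 2.0)]
print(hypervolume_2d(front, (0.0, 0.0)))  # 4.0
print(hypervolume_2d(front, (0.5, 0.5)))  # 1.75
```

When thresholds are omitted, Ax has to infer them from the data, which can behave differently from a deliberately chosen reference point.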

Issue Analytics

  • State: closed
  • Created a year ago
  • Comments: 6 (6 by maintainers)

Top GitHub Comments

1 reaction
sdaulton commented, Oct 23, 2022

Ah, thanks for clarifying your setup. For the second MOO experiment on frechet and luminosity, what are the inferred objective thresholds? Also, it looks like those plots are gone from your notebook.

Should I refactor my hacky scalarized objective (where I sum the two objectives in the evaluate function) and use a proper ax.core.objective.ScalarizedObjective instead?

ScalarizedObjective will model the outcomes independently, whereas if you scalarize the metrics yourself and provide a single scalar metric to Ax, only the scalarized metric will be modeled. If the objectives are quite correlated, then modeling the scalarized metric will likely give better results.

For plotting the observed metrics (including tracking metrics) for the evaluated designs (as in get_observed_pareto_frontier), it might be easier to follow this example: https://ax.dev/tutorials/multiobjective_optimization.html#Plot-empirical-data.

This style of plot is also nice because it shows the observations collected over time, which might provide more insight into the behavior of the method during data collection.
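For a quick local version of that plot, the non-dominated filter can be sketched in a few lines of NumPy. This is a toy stand-in for Ax's get_observed_pareto_frontier, for eyeballing only; it assumes every objective column is maximized (flip signs for minimized metrics) and does no modeling of the outcomes.

```python
import numpy as np

def observed_pareto_mask(Y):
    """Boolean mask of the non-dominated rows of Y, assuming every
    column is to be maximized. A tiny stand-in for Ax's
    get_observed_pareto_frontier, for quick plotting only."""
    Y = np.asarray(Y, dtype=float)
    mask = np.ones(len(Y), dtype=bool)
    for i in range(len(Y)):
        others = np.delete(Y, i, axis=0)
        # Row i is dominated if some other row is >= in every objective
        # and strictly > in at least one.
        dominated = np.any(
            np.all(others >= Y[i], axis=1) & np.any(others > Y[i], axis=1)
        )
        mask[i] = not dominated
    return mask

Y = [[1.0, 2.0], [2.0, 1.0], [0.5, 0.5], [2.0, 2.0]]
print(observed_pareto_mask(Y))  # [False False False  True]
```

The masked points can then be scattered (e.g. colored by trial index) to reproduce the over-time view the linked tutorial shows.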

0 reactions
sgbaird commented, Oct 31, 2022

@sdaulton my bad, I thought I had responded to this already. I will need to go back and check what the inferred thresholds were. Thanks for the detailed response!

I plan to follow the example you linked and post the updated results here.

@lena-kashtelyan I think it’s resolved to a good enough point. Will close for now! Thanks for checking in.


