multi-objective optimization (e.g. qNEHVI) vs. scalarized objectives. Which to choose?
I put together a tutorial illustrating the use of Ax's multi-objective optimization functionality and comparing it against scalarization approaches. When a scalarized quantity is used to compare performance, it makes sense that the scalarized objectives do better than MOO. However, when looking at Pareto fronts and comparing them against a naive scalarization approach (summing the two objectives), I was surprised to see that, in general, the naive-scalarization Pareto fronts seem better. This was on a straightforward, 3-parameter task with a single local maximum, AFAIK. The task is meant as a teaching demo (see e.g. the notebook tutorials); in particular, the notebook is 6.1-multi-objective.ipynb, linked above.
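For concreteness, here is a minimal sketch of the two setups being compared, using Ax's Service API; this is not the notebook's actual code. The metric names (frechet, luminous_intensity) come from the warnings below, while the parameter names, bounds, objective directions, and the `run_task` helper are placeholders:

```python
from ax.service.ax_client import AxClient, ObjectiveProperties

def evaluate(params):
    # run_task is a hypothetical stand-in for the real 3-parameter task.
    frechet, luminous_intensity = run_task(params)
    return {"frechet": frechet, "luminous_intensity": luminous_intensity}

# MOO setup: both metrics are handed to Ax, which applies its default
# multi-objective strategy (qNEHVI-based) to optimize them jointly.
moo = AxClient()
moo.create_experiment(
    name="moo",
    parameters=[
        {"name": f"x{i}", "type": "range", "bounds": [0.0, 1.0]} for i in range(3)
    ],
    objectives={
        "frechet": ObjectiveProperties(minimize=True),
        "luminous_intensity": ObjectiveProperties(minimize=False),
    },
)

# Naive scalarization: combine the two objectives into a single metric
# before Ax ever sees them, so Ax runs ordinary single-objective BO.
scalarized = AxClient()
scalarized.create_experiment(
    name="scalarized",
    parameters=[
        {"name": f"x{i}", "type": "range", "bounds": [0.0, 1.0]} for i in range(3)
    ],
    objectives={"combined": ObjectiveProperties(minimize=False)},
)
```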
I noticed that I regularly got the following warning during MOO:
```
c:\Users\<USERNAME>\Miniconda3\envs\sdl-demo\lib\site-packages\ax\modelbridge\transforms\winsorize.py:240: UserWarning:
Automatic winsorization isn't supported for an objective in `MultiObjective` without objective thresholds. Specify the winsorization settings manually if you want to winsorize metric frechet.

c:\Users\<USERNAME>\Miniconda3\envs\sdl-demo\lib\site-packages\ax\modelbridge\transforms\winsorize.py:240: UserWarning:
Automatic winsorization isn't supported for an objective in `MultiObjective` without objective thresholds. Specify the winsorization settings manually if you want to winsorize metric luminous_intensity.
```
- https://ax.dev/api/modelbridge.html#ax.modelbridge.transforms.winsorize.Winsorize: "Clip the mean values for each metric to lay within the limits provided in the config."
- https://en.wikipedia.org/wiki/Winsorizing: "Winsorizing or winsorization is the transformation of statistics by limiting extreme values in the statistical data to reduce the effect of possibly spurious outliers."
Out of the sklearn preprocessing scalers, winsorization seems most comparable to sklearn's RobustScaler (interestingly, it was the 3rd hit when searching for "winsorization sklearn"). There's also a winsorize function in SciPy (scipy.stats.mstats.winsorize). This is my attempt to frame it in terms of things I'm somewhat familiar with.
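To make that analogy concrete, here is a small example on made-up data contrasting SciPy's winsorize with sklearn's RobustScaler:

```python
import numpy as np
from scipy.stats.mstats import winsorize
from sklearn.preprocessing import RobustScaler

data = np.array([1.0, 2.0, 2.5, 3.0, 100.0])  # one spurious outlier

# Winsorization caps the extreme values themselves: clipping the top 20%
# replaces 100.0 with the largest retained value, 3.0.
print(winsorize(data, limits=(0.0, 0.2)))  # [1.0 2.0 2.5 3.0 3.0]

# RobustScaler instead rescales by the median and IQR; the outlier is kept
# but no longer dominates the location/scale estimates.
print(RobustScaler().fit_transform(data.reshape(-1, 1)).ravel())
```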
- Maybe I chose a poorly suited task to use for making this comparison.
- Does anything seem amiss in the implementation?
- Is part of the issue perhaps that I'm not specifying thresholds? (See the sketch after this list.)
- Open to any thoughts/feedback.
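On the thresholds question, a hedged sketch of what specifying objective thresholds could look like in the Service API, following the UserWarning above; the parameter setup mirrors the earlier sketch, and the threshold values are invented placeholders, not recommendations:

```python
from ax.service.ax_client import AxClient, ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="moo_with_thresholds",
    parameters=[
        {"name": f"x{i}", "type": "range", "bounds": [0.0, 1.0]} for i in range(3)
    ],
    objectives={
        # Thresholds define the reference point bounding the hypervolume
        # that qNEHVI improves upon, and per the warning above they also
        # enable automatic winsorization. Values here are placeholders.
        "frechet": ObjectiveProperties(minimize=True, threshold=10.0),
        "luminous_intensity": ObjectiveProperties(minimize=False, threshold=0.5),
    },
)
```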
Top GitHub Comments
Ah, thanks for clarifying your setup. For the second MOO experiment on frechet and luminosity, what are the inferred objective thresholds? Also, it looks like those plots are gone from your notebook.
`ScalarizedObjective` will model the outcomes independently, whereas if you scalarize the metrics yourself and provide a single scalar metric to Ax, only the scalarized metric will be modeled. If the objectives are quite correlated, then modeling the scalarized metric will likely give better results.
For plotting the observed metrics (including tracking metrics) for the evaluated designs (as in `get_observed_pareto_frontier`), it might be easier to follow this example: https://ax.dev/tutorials/multiobjective_optimization.html#Plot-empirical-data. This style of plot is also nice because it shows the observations collected over time, which might provide more insight into the behavior of the method during data collection.
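For reference, a minimal sketch of the distinction drawn above, using Ax's Developer API; the metric names and weights are assumptions for illustration, and `run_task` is a hypothetical helper:

```python
from ax.core.metric import Metric
from ax.core.objective import ScalarizedObjective
from ax.core.optimization_config import OptimizationConfig

# Option A: ScalarizedObjective. Ax fits a separate model to each metric
# and optimizes the weighted combination of their predictions.
objective = ScalarizedObjective(
    metrics=[Metric(name="luminous_intensity"), Metric(name="frechet")],
    weights=[1.0, -1.0],  # assumed: reward luminous_intensity, penalize frechet
    minimize=False,
)
optimization_config = OptimizationConfig(objective=objective)

# Option B: scalarize yourself. Compute a single combined value in the
# evaluation function and report only that metric; Ax then fits one model
# to the already-scalarized quantity.
def evaluate(params):
    frechet, luminous_intensity = run_task(params)  # hypothetical helper
    return {"combined": luminous_intensity - frechet}
```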
@sdaulton my bad, I thought I had responded to this already. I will need to go back and check what the inferred thresholds were. Thanks for the detailed response!
I plan to follow the example you linked and post the updated results here.
@lena-kashtelyan I think it’s resolved to a good enough point. Will close for now! Thanks for checking in.