Which acquisition function should I use to iteratively improve model accuracy rather than optimize a target value?
It took me a while to track down the issue and comment related to this, so I wanted to surface it in a new issue and immediately close it for better searchability.
I think there are two approaches. One option is to reformulate the problem so that you minimize an error metric instead of a target value; this requires a test set or some form of cross-validation that you trust. The other option, which seems preferable (especially if you are adding new data or starting from scratch), follows @Balandat’s comment in https://github.com/facebook/Ax/issues/460#issuecomment-758428881: use the botorch.acquisition.active_learning.qNegIntegratedPosteriorVariance (qNIPV) acquisition function:
Yeah qNIPV is agnostic to the direction, the goal is to minimize a global measure of uncertainty of the model, so there is no better or worse w.r.t. the function values.
If I understand it correctly, this implies no exploitation, but rather pure exploration.
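To make the pure-exploration point concrete, here is a minimal NumPy sketch of the idea behind qNIPV (not BoTorch's implementation; the RBF kernel, lengthscale, noise level, and grids below are illustrative assumptions): each candidate is scored by the negative posterior variance averaged over a set of Monte Carlo integration points, and the candidate that most reduces model-wide uncertainty is queried next. Note that the candidate's y-value never enters the score, which is exactly why this criterion is exploration-only.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def posterior_var(x_train, x_query, noise=1e-4):
    """GP posterior variance at x_query given observations at x_train.
    Only the *locations* x_train matter, not the observed values."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train)
    # var(x*) = k(x*, x*) - k(x*, X) K^{-1} k(X, x*)
    return 1.0 - np.sum((Ks @ np.linalg.inv(K)) * Ks, axis=1)

def qnipv(x_train, candidate, mc_points):
    """Negative posterior variance integrated (averaged) over mc_points,
    as if `candidate` had been added to the training set."""
    return -posterior_var(np.append(x_train, candidate), mc_points).mean()

# Greedy active-learning loop: pick the candidate with the highest qNIPV,
# i.e. the one that most reduces global model uncertainty.
x_train = np.array([0.1, 0.9])          # initial design
mc_points = np.linspace(0.0, 1.0, 101)  # integration grid over the design space
candidates = np.linspace(0.0, 1.0, 51)

for _ in range(3):
    scores = [qnipv(x_train, c, mc_points) for c in candidates]
    x_train = np.append(x_train, candidates[int(np.argmax(scores))])

print(np.round(np.sort(x_train), 2))  # queried points spread out over [0, 1]
```

In BoTorch itself you would instead fit a model and hand it to the real acquisition function, e.g. `qNegIntegratedPosteriorVariance(model=model, mc_points=mc_points)`, then maximize it with `optimize_acqf`; the `mc_points` argument plays the same role as the integration grid above.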
Issue Analytics
- State: Closed
- Created: a year ago
- Reactions: 1
- Comments: 8 (4 by maintainers)
Top GitHub Comments
Hi @eytan, this looks like a really exciting approach that directly gets to what we’re after. Thanks for your input!
@iandoxsee, you may be interested in https://botorch.org/tutorials/constraint_active_search, which aims to find all designs that exceed pre-specified thresholds across multiple outcomes (constraints).