Is it possible to specify an outcome_constraint on ax_client.get_next_trial() with an exactly known (non-objective) outcome?
The original GitHub issue #273 seems very relevant to this. For my use case (#727), I’ve thought about having:
outcome_constraints = ["n_components <= 8.0"]

to constrain a compositional formula to have no more than 8 components above some low threshold (e.g. 1e-3). In my case, n_components is calculated very simply (and exactly) using only the parameters. For example:
import numpy as np
import pandas as pd

def count_nonzero_components(parameters, tol=1e-3):
    # parameters: dict mapping parameter names to values for a single trial
    df = pd.DataFrame([parameters])
    df[df < tol] = 0.0  # zero out components below the threshold
    return int(np.count_nonzero(df.to_numpy(), axis=1)[0])
However, the first outcome (the objective) is not known a priori and can only really be sampled via wet-lab synthesis. In #273, it seems like the outcome constraint metric is being estimated from the GP model (I could be wrong on this). Is there a way to “tell” ax_client.get_next_trial() that n_components = count_nonzero_components(parameters) and that n_components <= 8.0?
I’ve been digging through custom generation strategies and custom acquisition functions (https://github.com/facebook/Ax/issues/278), but without much luck so far. As in #278, I’m also using the Service API.
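One pragmatic workaround with the Service API is to register n_components as an outcome and report its exactly known value with an SEM of 0.0 when completing each trial, so the model at least treats it as noiseless. This is only a sketch of the raw_data payload that ax_client.complete_trial accepts (the metric names "objective" and "n_components", and the helper make_raw_data, are illustrative assumptions, not Ax API); note that Ax will still fit a surrogate to this deterministic metric rather than using its closed form.

```python
import numpy as np
import pandas as pd

def count_nonzero_components(parameters, tol=1e-3):
    # Count parameters at or above the threshold for a single trial's dict.
    df = pd.DataFrame([parameters])
    df[df < tol] = 0.0
    return int(np.count_nonzero(df.to_numpy(), axis=1)[0])

def make_raw_data(parameters, measured_objective):
    # raw_data maps metric name -> (mean, SEM). An SEM of 0.0 marks the
    # outcome as exactly known; None asks Ax to infer the noise level.
    return {
        "objective": (measured_objective, None),
        "n_components": (float(count_nonzero_components(parameters)), 0.0),
    }

params = {"Al": 0.60, "Cu": 0.3995, "Ni": 0.0005}  # Ni falls below tol=1e-3
raw_data = make_raw_data(params, measured_objective=1.23)
# ax_client.complete_trial(trial_index=idx, raw_data=raw_data)
```

Here only Al and Cu exceed the 1e-3 threshold, so n_components is reported as 2.0 with zero SEM.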
Issue Analytics
- Created: 2 years ago
- Comments: 8 (8 by maintainers)
Top GitHub Comments
Late to the party, but Lena is spot on in saying that:
We could try to model this constraint as a black box function, but that will cause all kinds of issues and will also be quite inefficient (why do we learn the constraint if we already know it in closed form?).
I think that, as with your other issue, the correct approach here would be to use a custom generation strategy that is able to take in nonlinear constraints. If you look at your constraint, \sum_i 1{x_i > 1e-3} < 9, you’ll see that it’s not just nonlinear but also non-convex (which I guess isn’t really too big of a problem, since the acquisition function is usually non-convex as well). So the proper solution here would be to change the API and allow Ax to take in some callable that evaluates the constraint and that we can pass to the optimizer. This is not too hard in principle, but because of the various transformations and normalizations that we apply in the modelbridge layer to both parameters and data, this can cause a bunch of headaches in practice. Essentially, allowing this would provide an excellent way for people to shoot themselves in the foot. That said, there clearly seems to be a need for this functionality, so maybe the right thing to do would be to throw together a proof of concept and slap a big red warning sign on it?
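One way such a constraint callable could be made friendly to gradient-based acquisition optimization is a smooth sigmoid relaxation of the indicator sum. A minimal sketch, assuming the common convention that a point is feasible iff the callable returns a value >= 0; the names soft_count and constraint and the steepness beta are illustrative, not part of Ax:

```python
import numpy as np

def soft_count(x, tol=1e-3, beta=1e4):
    # Differentiable surrogate for sum_i 1{x_i > tol}: a steep sigmoid
    # centered at tol approaches the exact 0/1 indicator as beta grows.
    x = np.asarray(x, dtype=float)
    return float(np.sum(1.0 / (1.0 + np.exp(-beta * (x - tol)))))

def constraint(x, max_components=8):
    # Feasible iff constraint(x) >= 0, i.e. soft_count(x) <= max_components.
    return max_components - soft_count(x)

x = [0.5, 0.3, 0.1995, 0.0005, 0.0]  # two entries fall below tol
# soft_count(x) is close to 3, so constraint(x) is close to 5 (feasible)
```

In principle a callable of this shape could be handed to BoTorch's optimize_acqf via its nonlinear_inequality_constraints argument inside a custom generation strategy, though the exact requirements (e.g. batch initial conditions) depend on the BoTorch version, and the parameter transforms in the modelbridge layer would still need to be accounted for.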
@sgbaird, this is definitely a tricky issue, and I think you are right that this approach would likely have issues:
cc @Balandat to check me on that one, and also to check the overall thinking here