What happens to subsequent trials when an output constraint is not satisfied?
With the Service API, the function I am optimizing returns two metrics (e.g. `m1` and `m2`). I have defined one as the objective (`m1`) and placed an output constraint on the other (`m2 <= x`).
During the trials, if the output constraint is not satisfied, how is this recorded and how is this information used in a subsequent trial?
Is this information taken into consideration when sampling the next arm?
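For context, a setup like the one described might look roughly as follows with the Ax Service API. This is a minimal sketch: the parameter space, parameter names, and the constraint bound (here `10.0` standing in for `x`) are hypothetical placeholders, not taken from the issue.

```python
from ax.service.ax_client import AxClient

ax_client = AxClient()
ax_client.create_experiment(
    name="constrained_experiment",
    # Hypothetical two-parameter search space.
    parameters=[
        {"name": "p1", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "p2", "type": "range", "bounds": [0.0, 1.0]},
    ],
    objective_name="m1",                 # metric being optimized
    minimize=True,
    outcome_constraints=["m2 <= 10.0"],  # placeholder bound standing in for x
)
```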
Top GitHub Comments
Re: taking into account trials that do not satisfy constraints / or trials that satisfy constraints but whose objective values are not great
In each trial, the method takes data from all past trials and builds a GP model to capture the relationship between the metrics and your tuning parameters. So even though some trials are not great in terms of the optimization goal, they still help learn that relationship, which lets the optimization avoid the bad region.
Method-wise, it's based on Bayesian optimization (https://ax.dev/docs/bayesopt.html). As more trials accumulate, the optimization converges toward a global optimum. If your outcome constraint is feasible, the best trial from running the Service API will satisfy the constraint while achieving the best objective value.
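To make the point above concrete, here is a sketch of the evaluation loop under the hypothetical setup shown earlier (the `evaluate` function is an invented stand-in for whatever computes `m1` and `m2`). A trial whose `m2` ends up violating the constraint is still completed normally; its data is attached to the experiment and feeds the GP model that steers later trials toward the feasible region.

```python
def evaluate(parameters):
    # Hypothetical evaluation returning both metrics for a parameterization.
    m1_val = (parameters["p1"] - 0.3) ** 2 + parameters["p2"]
    m2_val = 20.0 * parameters["p1"]  # may exceed the bound of 10.0
    return m1_val, m2_val

for _ in range(20):
    parameters, trial_index = ax_client.get_next_trial()
    m1_val, m2_val = evaluate(parameters)
    # Both metrics are reported even if m2 violates "m2 <= 10.0"; the model
    # still learns from this point and avoids that region in later trials.
    ax_client.complete_trial(
        trial_index=trial_index,
        raw_data={"m1": (m1_val, 0.0), "m2": (m2_val, 0.0)},
    )
```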
~~Please clarify what you mean by: "During the trials, if the output constraint is not satisfied."~~
~~You shouldn't be getting trials that don't satisfy the constraints. Are you? If so, please include a code snippet of how you set up your experiment and what trials you are seeing.~~
Update: I see, I misunderstood your question. You are asking what happens if a trial ends up not satisfying an outcome constraint after evaluation. I believe the answer is that this information is used to produce future trials that are more likely to satisfy the outcome constraint, and that the trial that did not satisfy it will never be considered 'best' on the experiment.
cc @qingfeng10 if she wants to add anything re: methodology of taking into account trials that did not satisfy outcome constraints.
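Following the comments above, retrieving the recommended arm at the end of the loop would look like the sketch below. Per the maintainers' answers, if the constraint is feasible the returned parameterization should satisfy `m2 <= x`, rather than simply being the trial with the best raw `m1` regardless of feasibility.

```python
# Best parameters as judged by the model, respecting the outcome constraint
# as described in the comments above.
best_parameters, values = ax_client.get_best_parameters()
print(best_parameters)
print(values)  # predicted (means, covariances) for the metrics at that arm
```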