[Feature Request] Safe optimization #2240
Replies: 5 comments
-
I take it in your setting the value of the constraint function is observable? I.e. it's something like […]
Yeah that's the obvious thing to try. You could use […]. One thing to note is that scipy's default nonlinear optimization algorithms aren't that great, and this can get pretty slow, especially if the constraint function is complicated and there are a lot of observed data points. The other thing is that, depending on the problem, the feasible region may be pretty nasty and potentially disconnected, so finding good feasible initial conditions would be important. Also make sure to set the […] (see botorch/optim/optimize.py, lines 471 to 485 at 520468e). Also, there is this recent relevant paper (which I haven't read in detail yet, but @jduerholt may have some thoughts on :)) and references therein: https://arxiv.org/abs/2402.07692
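For concreteness, a minimal sketch of this route: a probability-of-feasibility constraint passed to optimize_acqf via nonlinear_inequality_constraints, with explicitly supplied feasible starting points. It assumes a BoTorch version where nonlinear_inequality_constraints takes (callable, is_intrapoint) tuples; acqf, constraint_model (a GP on the constraint, feasible when c(x) <= 0), bounds, and feasible_starts are placeholder names.

```python
import torch
from torch.distributions import Normal
from botorch.optim import optimize_acqf

MIN_PROB = 0.9  # required predicted probability of feasibility

def prob_feasible(x: torch.Tensor) -> torch.Tensor:
    # x has shape (d,) for an intra-point constraint; add a batch dim for the model.
    posterior = constraint_model.posterior(x.unsqueeze(0))
    mean = posterior.mean.squeeze()
    sigma = posterior.variance.clamp_min(1e-12).sqrt().squeeze()
    # Posterior probability that the constraint value is <= 0 (i.e. feasible).
    return Normal(0.0, 1.0).cdf(-mean / sigma)

def safety_constraint(x: torch.Tensor) -> torch.Tensor:
    # optimize_acqf treats callable(x) >= 0 as satisfied.
    return prob_feasible(x) - MIN_PROB

candidate, acq_value = optimize_acqf(
    acq_function=acqf,
    bounds=bounds,
    q=1,
    num_restarts=feasible_starts.shape[0],
    # (callable, is_intrapoint): the callable is applied to each point separately.
    nonlinear_inequality_constraints=[(safety_constraint, True)],
    # Feasible starting points of shape num_restarts x 1 x d must be supplied
    # (or an ic_generator), since the default initializer ignores nonlinear constraints.
    batch_initial_conditions=feasible_starts,
)
```

Note that the scipy-based optimizer differentiates the constraint callable with autograd, so the helper should stay a pure torch computation of the posterior.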
-
A few thoughts from my side:
- In case the black box can be measured, the standard approach in botorch would be to have it as an output constraint, in which the […] (a sketch of this route follows below).
- In the paper mentioned by @Balandat we tackled the problem of a non-measurable black box constraint by setting them up as a […]
- Regarding cyipopt: this is also on my long list of things that I want to try in botorch.
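For reference, a minimal sketch of the output-constraint route from the first point above, assuming the constraint is observable and modeled by its own GP; obj_model, con_model, and train_X are placeholder names, and the constraint output is taken to be feasible when it is <= 0.

```python
from botorch.acquisition import qLogNoisyExpectedImprovement
from botorch.acquisition.objective import GenericMCObjective
from botorch.models import ModelListGP

# Output 0 is the objective model, output 1 the (observable) constraint model.
model = ModelListGP(obj_model, con_model)

acqf = qLogNoisyExpectedImprovement(
    model=model,
    X_baseline=train_X,
    objective=GenericMCObjective(lambda Z, X=None: Z[..., 0]),
    # Constraint samples <= 0 count as feasible; infeasible samples are
    # smoothly down-weighted rather than excluded outright.
    constraints=[lambda Z: Z[..., 1]],
)
```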
-
That is indeed the general approach; one thing that it doesn't specifically do, though, is keep the probability of violation during exploration controlled in a specific way. This is because the target is a probability-weighted acquisition function; e.g. in the case of EI you'd optimize […]
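This probability-weighted form is what the analytic ConstrainedExpectedImprovement implements: the EI of the objective output multiplied by the posterior probability that the constraint output lies within the specified bounds. A minimal sketch, assuming a two-output model (objective at index 0, constraint at index 1, feasible when <= 0); best_feasible_f is a placeholder for the best feasible value observed so far.

```python
from botorch.acquisition.analytic import ConstrainedExpectedImprovement

acqf = ConstrainedExpectedImprovement(
    model=model,             # multi-output GP over objective and constraint
    best_f=best_feasible_f,  # best feasible objective value seen so far
    objective_index=0,
    # Feasible iff output 1 lies in (-inf, 0]; EI is multiplied by the
    # probability of that event, so violations are discouraged on average
    # but not held below a target probability.
    constraints={1: [None, 0.0]},
)
```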
-
Thanks to both of you for the discussion of this problem. It's correct that the black box constraint is an observable. I'll take a look at the different methods you discussed and see which would work best for our use case. While the ad hoc method of adding an offset to the output constraint is appealing due to its simplicity, it would be nice to specify the trade-off between exploration/optimization and constraint violations more rigorously.
-
I'm going to convert this into a discussion item.
-
🚀 Feature Request
I'm looking to perform constrained optimization in a "safer" context where frequent violations of the constraint cannot be tolerated. To do this, I would like to be able to specify a constraint on optimizing the acquisition function such that points are only proposed if the constraint GP model predicts at least an X% probability of satisfying the constraint (for example, a 90% chance of satisfying the constraint). Any ideas on how best to implement this?
Motivation
Is your feature request related to a problem? Please describe.
We are attempting to perform BO in particle accelerator control problems where violating the constraint can potentially lead to equipment damage, so we need to be 50-99% confident that constraints are satisfied (while being able to trade safety for optimization speed in non-critical situations).
This would be an alternative to the SafeOpt approach described here https://arxiv.org/abs/1902.03229, which limits the acquisition function domain along a 1D subspace.
Pitch
Describe the solution you'd like
It seems like the best way to do this is to add a constraint to the optimize_acqf call, with a helper method that defines the constraining function based on a calculation of the probability of violating the constraint.
Describe alternatives you've considered
None
Are you willing to open a pull request? (See CONTRIBUTING)
Yes, just looking for some guidance.