Constraints feature #45
Conversation
Nice, adding constraints to the ACQ optimization is indeed a good and simple idea, thanks for that! A couple of notes:
All these points are practical issues that would not be a problem in an ideal scenario where ACQ optimization worked smoothly; nevertheless, reality is not that nice and we have to compromise. One way to solve all the points above is to sample points not uniformly, but according to the constraints. I believe this would be relatively easy to accomplish with rejection sampling: only accept points that obey all constraints. It would add a bit of overhead, but I think it's a small price to pay to maintain such practical functionality.
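The overhead of rejection sampling is governed by the fraction of the search space that is feasible: on average you need 1/fraction draws per accepted point. A quick Monte Carlo sketch of that estimate (the bounds and the `x <= y` constraint here are made-up examples, not from this PR):

```python
import numpy as np

# Estimate the feasible fraction of a hypothetical 2-D box [0, 10]^2
# under the example constraint x <= y, then derive the expected number
# of uniform draws needed per accepted point under rejection sampling.
rng = np.random.RandomState(0)
bounds = np.array([[0.0, 10.0], [0.0, 10.0]])

samples = rng.uniform(bounds[:, 0], bounds[:, 1], size=(100_000, 2))
feasible = samples[:, 0] <= samples[:, 1]  # boolean mask per sample

fraction = feasible.mean()
print(f"feasible fraction ~ {fraction:.3f}")          # ~0.5 for x <= y
print(f"expected draws per point ~ {1 / fraction:.1f}")
```

The tighter the constraints cut down the box, the smaller `fraction` gets and the more resampling rounds the loop will need.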
Thanks! You're right, it should just be a matter of checking the right condition based on each constraint. I'll see what I can do about it and tag you when I'm done.
Is there any progress on merging this? @jvmancuso

If you need code for doing constrained random sampling, I've written something that should work:

```python
import numpy as np


def constrainted_uniform(lower, upper, size, constraints,
                         random_state=np.random):
    """
    Example:
        >>> bounds = np.array([[0, 10], [0, 10]])
        >>> def cons1(data):
        ...     # model constraint x <= y (true is any non-negative value)
        ...     x, y = data
        ...     return y - x
        >>> constraints = [
        ...     {'type': 'leq', 'fun': cons1}
        ... ]
        >>> lower = bounds[:, 0]
        >>> upper = bounds[:, 1]
        >>> size = (3, bounds.shape[0])
        >>> random_state = np.random.RandomState(0)
        >>> constrainted_uniform(lower, upper, size, constraints, random_state)
        array([[5.48813504, 7.15189366],
               [4.37587211, 8.91773001],
               [4.23654799, 6.45894113]])
    """
    def satisfied(cons, x_try):
        if cons['type'] == 'eq':
            return cons['fun'](x_try) == 0
        elif cons['type'] == 'leq':
            return cons['fun'](x_try) >= 0
        else:
            raise ValueError(cons['type'])

    # Initial random sample
    x_seeds = random_state.uniform(lower, upper, size=size)
    # Indices of rows that violate at least one constraint
    unsatisfied_idx, = np.where(
        [not all(satisfied(cons, x_try) for cons in constraints)
         for x_try in x_seeds])
    # Reject and resample until everything is satisfied
    max_iters = min(100, (2 * size[0]) ** 2)
    n_iters = 0
    while len(unsatisfied_idx) > 0:
        if n_iters > max_iters:
            raise RuntimeError('took too long to find good points')
        n_invalid = len(unsatisfied_idx)
        new_size = (n_invalid,) + tuple(size[1:])
        new_seeds = random_state.uniform(lower, upper, size=new_size)
        needs_resample = [
            not all(satisfied(cons, x_try) for cons in constraints)
            for x_try in new_seeds
        ]
        x_seeds[unsatisfied_idx] = new_seeds
        unsatisfied_idx = unsatisfied_idx[needs_resample]
        n_iters += 1
    return x_seeds
```
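One caveat with the rejection approach above: an `'eq'` constraint on continuous variables is satisfied with probability essentially zero, so the resample loop would spin until it hits its iteration cap and raises `RuntimeError`. A quick demonstration (hypothetical constraint, same `y - x == 0` check the `satisfied` helper performs):

```python
import numpy as np

# Draw a million uniform (x, y) pairs and count how many satisfy the
# equality constraint y - x == 0 exactly. For continuous draws this is
# a measure-zero event, so the count is (almost surely) zero.
rng = np.random.RandomState(0)
samples = rng.uniform(0, 10, size=(1_000_000, 2))
exact_matches = int(np.sum(samples[:, 1] - samples[:, 0] == 0.0))
print(exact_matches)  # almost surely 0
```

In practice, equality constraints need a tolerance (e.g. `abs(fun(x)) <= eps`) or a reparameterization of the search space rather than pure rejection.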
Whoops! Somehow this got lost in the weeds for me. I'll have this ready for review by next week.

@Erotemic Thanks! I'll integrate your code into my solution.
I'm closing this since another PR does pretty much the same thing with tests and doc strings.
As per #44, I've added a `constraints` parameter to the `__init__` method for the `BayesianOptimization` class. I plan on adding a notebook in the examples folder detailing how to use the feature in a separate PR.

I had to delete the "warm up with random points" part of the `acq_max` function, as it allowed random searching outside of the constrained search space. Another issue that came up was that initialization points increasingly fail to satisfy the constraints as larger sections of the input space are cut out by the constraints. The only obvious way to prevent this is to allow the user to input `init_points = 0` in the `maximize` method; however, this isn't currently accommodated by the code. I'll create a separate issue for this once this PR is merged.

@fmfn: feedback would be appreciated.
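The constraint format used throughout this thread follows SciPy's optimizer convention: a list of dicts, each with a `'type'` and a callable `'fun'`. The exact `BayesianOptimization` signature isn't shown here, so the snippet below only sketches building such a list and checking candidate points against it by hand; the `is_feasible` helper and the constraint itself are illustrative, not part of the PR:

```python
import numpy as np

def is_feasible(point, constraints):
    """Return True if `point` satisfies every constraint dict.

    Mirrors the thread's convention: 'eq' means fun(x) == 0,
    'leq' means fun(x) >= 0.
    """
    for cons in constraints:
        value = cons['fun'](point)
        if cons['type'] == 'eq' and value != 0:
            return False
        if cons['type'] == 'leq' and value < 0:
            return False
    return True

# Example constraint from the thread: x <= y, encoded as y - x >= 0.
constraints = [{'type': 'leq', 'fun': lambda p: p[1] - p[0]}]

points = np.array([[1.0, 2.0], [3.0, 1.0]])
print([is_feasible(p, constraints) for p in points])  # [True, False]
```

Matching SciPy's dict format keeps the API familiar and lets users reuse constraint definitions they already have for `scipy.optimize.minimize`.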