
Constraints feature #45

Closed · wants to merge 3 commits

Conversation

jvmncs commented Mar 7, 2017

As per #44, I've added a constraints parameter to the __init__ method for the BayesianOptimization class. I plan on adding a notebook in the examples folder detailing how to use the feature in a separate PR.

I had to delete the "warm up with random points" part of the acq_max function, as it allowed random searching outside of the constrained search space. Another issue that came up was that initialization points increasingly fail to satisfy the constraints as larger sections of the input space are cut out by the constraints. The only obvious way to prevent this is to allow the user to pass init_points = 0 to the maximize method; however, this isn't currently accommodated by the code. I'll create a separate issue for this once this PR is merged.
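For illustration only, here is a hypothetical sketch of what such a constraints parameter could look like and how feasibility could be checked against it. The dict convention (scipy.optimize-style `'type'`/`'fun'` entries) and the `is_feasible` helper are my assumptions for the example, not the API this PR actually adds:

```python
import numpy as np

# Hypothetical constraint list in the scipy.optimize dict style:
# each constraint is feasible when fun(point) >= 0.
constraints = [
    # feasible iff x + y <= 6, written as the inequality 6 - (x + y) >= 0
    {'type': 'ineq', 'fun': lambda data: 6 - (data[0] + data[1])},
]

def is_feasible(point, constraints):
    """Return True only if the point satisfies every constraint."""
    return all(c['fun'](point) >= 0 for c in constraints)

print(is_feasible(np.array([2.0, 3.0]), constraints))  # True:  2 + 3 <= 6
print(is_feasible(np.array([5.0, 4.0]), constraints))  # False: 5 + 4 > 6
```

A check like this is what the acq_max optimization would need to apply to each candidate point inside the constrained search space.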

@fmfn: feedback would be appreciated.

fmfn (Member) commented Mar 7, 2017

Nice, adding constraints to the ACQ optimization is indeed a good and simple idea, thanks for that!

A couple of notes:

  • The random seeds used as starting points for the minimize function are also uniformly distributed, and hence likely to lie outside the region allowed by the constraints.
  • Warming up the acq optimization with random points turns out to be a really good and efficient way to avoid pathological regions and getting stuck in local minima of the acq, so keeping it is highly desirable.
  • Initializing with random points is not crucial, but it is nice to have.

All these points are practical issues that would not be a problem in an ideal scenario where ACQ optimization worked smoothly; nevertheless, reality is not that nice and we have to compromise.

One way to solve all the points above is to sample points not uniformly, but according to the constraints. I believe this would be relatively easy to accomplish with rejection sampling --- only accept points that obey all constraints. It would add a bit of overhead, but I think that is a small price to pay to maintain such practical functionality.
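The rejection-sampling idea described above can be sketched in a few lines: draw uniform candidates inside the box bounds and keep only those that satisfy every constraint, until enough points are accepted. This is a minimal illustration, not library code; the function name `rejection_sample` and the `'fun'`-dict constraint convention are assumptions for the example:

```python
import numpy as np

def rejection_sample(lower, upper, n, constraints, rng, max_draws=10000):
    """Draw n points uniformly in [lower, upper], rejecting any point
    that violates a constraint (feasible means fun(x) >= 0)."""
    accepted = []
    draws = 0
    while len(accepted) < n:
        if draws >= max_draws:
            # Guard against constraints that leave (almost) no feasible volume.
            raise RuntimeError('feasible region may be too small')
        x = rng.uniform(lower, upper)
        draws += 1
        if all(c['fun'](x) >= 0 for c in constraints):
            accepted.append(x)
    return np.array(accepted)

rng = np.random.RandomState(0)
cons = [{'type': 'ineq', 'fun': lambda p: p[1] - p[0]}]  # require x <= y
pts = rejection_sample(np.array([0.0, 0.0]), np.array([10.0, 10.0]), 5, cons, rng)
assert (pts[:, 0] <= pts[:, 1]).all()  # every accepted sample obeys x <= y
```

The overhead mentioned above shows up as wasted draws: the expected number of draws per accepted point is the inverse of the feasible fraction of the box volume.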

jvmncs (Author) commented Mar 7, 2017

Thanks! You're right, it should just be a matter of checking the right condition based on each constraint. I'll see what I can do about it and tag you when I'm done.

@jvmncs jvmncs mentioned this pull request Mar 7, 2017
Erotemic (Contributor) commented Dec 4, 2017

Is there any progress on merging this?

@jvmancuso If you need code for doing constrained random sampling, I've written something that should work:

import numpy as np


def constrained_uniform(lower, upper, size, constraints,
                        random_state=np.random):
    """
    Sample uniformly within box bounds, rejecting and redrawing any
    point that violates a constraint.

    Example:
        >>> bounds = np.array([[0, 10], [0, 10]])
        >>> def cons1(data):
        >>>     # model constraint x <= y (true is any non-negative value)
        >>>     x, y = data
        >>>     return y - x
        >>> constraints = [
        >>>     {'type': 'leq', 'fun': cons1}
        >>> ]
        >>> lower = bounds[:, 0]
        >>> upper = bounds[:, 1]
        >>> size = (3, bounds.shape[0])
        >>> random_state = np.random.RandomState(0)
        >>> constrained_uniform(lower, upper, size, constraints, random_state)
        array([[ 5.48813504,  7.15189366],
               [ 4.37587211,  8.91773001],
               [ 4.23654799,  6.45894113]])
    """
    def satisfied(cons, x_try):
        if cons['type'] == 'eq':
            return cons['fun'](x_try) == 0
        elif cons['type'] == 'leq':
            return cons['fun'](x_try) >= 0
        else:
            raise ValueError(cons['type'])

    # Initial random sample
    x_seeds = random_state.uniform(lower, upper, size=size)
    unsatisfied_idx, = np.where(
        [not all(satisfied(cons, x_try) for cons in constraints)
         for x_try in x_seeds])

    # Reject and resample until everything is satisfied
    max_iters = min(100, (2 * size[0]) ** 2)
    n_iters = 0
    while len(unsatisfied_idx) > 0:
        if n_iters > max_iters:
            raise RuntimeError('took too long to find good points')
        n_invalid = len(unsatisfied_idx)
        new_size = (n_invalid,) + tuple(size[1:])
        new_seeds = random_state.uniform(lower, upper, size=new_size)

        # Overwrite the invalid rows; replacements that are themselves
        # invalid stay in unsatisfied_idx and are redrawn next iteration.
        needs_resample = [
            not all(satisfied(cons, x_try) for cons in constraints)
            for x_try in new_seeds
        ]

        x_seeds[unsatisfied_idx] = new_seeds
        unsatisfied_idx = unsatisfied_idx[needs_resample]
        n_iters += 1
    return x_seeds

jvmncs (Author) commented Jan 11, 2018

Whoops! Somehow this got lost in the weeds for me. I'll have this ready for review by next week.

@Erotemic Thanks! I'll integrate your code into my solution.

fmfn (Member) commented Jul 6, 2018

I'm closing this since another PR does pretty much the same thing, with tests and docstrings.

@fmfn fmfn closed this Jul 6, 2018