Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization

Nov 28, 2022

About
Optimizing expensive-to-evaluate black-box functions of discrete (and potentially continuous) design parameters is a ubiquitous problem in scientific and engineering applications. Bayesian optimization (BO) is a popular sample-efficient method that selects promising designs to evaluate by optimizing an acquisition function (AF) over some domain with respect to a surrogate model. However, maximizing the AF over mixed or high-cardinality discrete search spaces is challenging, as we can neither use standard gradient-based methods nor evaluate the AF at every point in the search space. To address this issue, we propose using probabilistic reparameterization (PR): instead of directly optimizing the AF over the search space containing discrete variables, we maximize the expectation of the AF over a probability distribution defined by continuous parameters. We prove that under suitable proposal probability distributions, the BO policy that maximizes the probabilistic objective is the same as that which maximizes the AF, and therefore PR enjoys the same regret bounds as the underlying AF. Moreover, our approach admits provably convergent global optimization of the AF (an often neglected requisite for commonly used BO regret bounds) using scalable, unbiased estimators of both the probabilistic objective and its gradient. We validate our approach empirically and demonstrate state-of-the-art optimization performance on many real-world applications. Lastly, we show that PR is complementary to (and benefits) recent work and naturally generalizes to settings with multiple objectives and black-box constraints.
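The core idea can be illustrated with a minimal sketch, not the paper's implementation: a toy acquisition function over binary designs is a stand-in (the paper would use, e.g., expected improvement under a Gaussian process surrogate), each binary variable is reparameterized as a Bernoulli with a continuous probability parameter, and the expectation of the AF is maximized by stochastic gradient ascent using an unbiased score-function (REINFORCE-style) gradient estimator. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def acquisition(x):
    # Hypothetical stand-in AF over binary designs x in {0, 1}^3.
    # Its maximizer is the design [1, 0, 1].
    target = np.array([1.0, 0.0, 1.0])
    return -np.sum((x - target) ** 2, axis=-1)


def pr_objective_and_grad(theta, n_samples=1024):
    """Unbiased Monte Carlo estimates of E_{X ~ Bernoulli(theta)}[AF(X)]
    and its gradient w.r.t. theta via the score-function identity
    grad E[AF(X)] = E[AF(X) * grad log p(X; theta)]."""
    x = (rng.random((n_samples, theta.size)) < theta).astype(float)
    af = acquisition(x)
    # For a Bernoulli, d/dtheta log p(x; theta) = x/theta - (1 - x)/(1 - theta).
    score = x / theta - (1.0 - x) / (1.0 - theta)
    return af.mean(), (af[:, None] * score).mean(axis=0)


# Continuous parameters of the proposal distribution, one per binary variable.
theta = np.full(3, 0.5)
for _ in range(200):
    _, grad = pr_objective_and_grad(theta)
    # Clip away from {0, 1} to keep the score-function estimator well behaved.
    theta = np.clip(theta + 0.05 * grad, 0.01, 0.99)

# The most probable design under the optimized distribution recovers
# the maximizer of the toy AF.
best_x = (theta > 0.5).astype(float)
```

In this toy setting, gradient ascent drives the Bernoulli parameters toward the corners of the hypercube, and rounding to the most probable design recovers the AF maximizer. The paper's treatment goes further than this sketch, e.g., covering categorical and mixed spaces and proving equivalence of the resulting BO policy.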

NeurIPS 2022