Bayesian Quadrature (BQ) object

class bayesian_quadrature.bq.BQ(x, l, **options)[source]

Bases: object

Estimate an integral of the following form using Bayesian Quadrature with a Gaussian Process prior:

\[Z = \int \ell(x)\mathcal{N}(x\ |\ \mu, \sigma^2)\ \mathrm{d}x\]

See load_options for details on allowable options.

Parameters :

x : numpy.ndarray

size \(s\) array of sample locations

l : numpy.ndarray

size \(s\) array of sample observations

options : dict

Options dictionary

Notes

This algorithm is an updated version of the one described in [OD12]. The overall idea is:

  1. Estimate \(\log\ell\) using a GP.
  2. Estimate \(\bar{\ell}=\exp(\log\ell)\) using a second GP.
  3. Integrate exactly under \(\bar{\ell}\), as sketched below.
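
A minimal numpy sketch of these three steps (the kernel, hyperparameter values, and helper names are illustrative assumptions, not this package's API):

    import numpy as np

    def sq_exp(a, b, h, w):
        # squared-exponential kernel k(a, b) = h^2 exp(-(a - b)^2 / (2 w^2))
        return h ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / w ** 2)

    def gp_mean(x_s, y_s, x, h, w, jitter=1e-8):
        # posterior mean of a zero-mean GP conditioned on (x_s, y_s)
        K = sq_exp(x_s, x_s, h, w) + jitter * np.eye(len(x_s))
        return sq_exp(x, x_s, h, w) @ np.linalg.solve(K, y_s)

    x_s = np.linspace(-2, 2, 8)          # sample locations
    l_s = np.exp(-x_s ** 2)              # sample observations of ell
    tl_s = np.log(l_s)                   # step 1: model log(ell) with a GP

    x_c = np.linspace(-2.5, 2.5, 30)     # candidate locations
    l_c = np.exp(gp_mean(x_s, tl_s, x_c, h=1.0, w=1.0))  # step 2: ell-bar
    # step 3: fit a second GP to (x_c, l_c) and integrate it analytically
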
Z_mean()[source]

Computes the mean of \(Z\), which is defined as:

\[\mathbb{E}[Z]=\int \bar{\ell}(x)p(x)\ \mathrm{d}x\]
Returns :

mean : float
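
When the kernel of the GP over \(\bar{\ell}\) is Gaussian, this integral has a closed form: \(\mathbb{E}[Z] = z^\top K^{-1}\ell\), where \(z_i = \int k(x, x_i)\mathcal{N}(x\ |\ \mu, \sigma^2)\ \mathrm{d}x\). A hedged numpy sketch of that standard result (illustrative function, not this method's implementation):

    import numpy as np

    def bq_Z_mean(x_s, l_s, h, w, mu, sigma2, jitter=1e-8):
        # E[Z] = z^T K^{-1} l, where z_i = int k(x, x_i) N(x | mu, sigma2) dx.
        # For a Gaussian kernel the weights z_i have the closed form
        # z_i = h^2 sqrt(w^2 / (w^2 + sigma2))
        #       * exp(-(x_i - mu)^2 / (2 (w^2 + sigma2)))
        K = h ** 2 * np.exp(-0.5 * (x_s[:, None] - x_s[None, :]) ** 2 / w ** 2)
        K += jitter * np.eye(len(x_s))
        z = h ** 2 * np.sqrt(w ** 2 / (w ** 2 + sigma2)) \
            * np.exp(-0.5 * (x_s - mu) ** 2 / (w ** 2 + sigma2))
        return z @ np.linalg.solve(K, l_s)
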
Z_var()[source]

Computes the variance of \(Z\), which is defined as:

\[\mathbb{V}(Z)\approx \int\int \mathrm{Cov}_{\log\ell}(x, x^\prime)\bar{\ell}(x)\bar{\ell}(x^\prime)p(x)p(x^\prime)\ \mathrm{d}x\ \mathrm{d}x^\prime\]
Returns :

var : float
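
As a sanity check, the double integral reduces to a quadratic form under grid quadrature. A hedged sketch, assuming the posterior covariance of the GP over \(\log\ell\) and the values of \(\bar{\ell}\) have already been evaluated on a regular grid (helper name is illustrative):

    import numpy as np

    def bq_Z_var_grid(xg, cov_log_l, l_bar, mu, sigma2):
        # approximate the double integral as w^T C w, with weights
        # w_i = l_bar(x_i) p(x_i) dx on a regularly spaced grid xg
        dx = xg[1] - xg[0]
        p = np.exp(-0.5 * (xg - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        w = l_bar * p * dx
        return w @ cov_log_l @ w
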
add_observation(x_a, l_a)[source]
choose_next(x_a, n, params, plot=False)[source]
copy(deep=True)[source]
expected_Z_var(x_a)[source]

Computes the expected variance of \(Z\) given a new observation \(x_a\). This is defined as:

\[\mathbb{E}[\mathbb{V}(Z)\ |\ \ell_s, \ell_a] = \mathbb{E}[Z\ |\ \ell_s]^2 + \mathbb{V}(Z\ |\ \ell_s) - \int \mathbb{E}[Z\ |\ \ell_s, \ell_a]^2 \mathcal{N}(\ell_a\ |\ \hat{m}_a, \hat{C}_a)\ \mathrm{d}\ell_a\]
Parameters :

x_a : numpy.ndarray

vector of points for which to (independently) compute the expected variance

Returns :

out : expected variance for each point in x_a
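
Comparing this expression with expected_squared_mean below shows that only the final term depends on \(x_a\); the expected variance can therefore be assembled from methods the class already exposes. A hedged sketch under that reading:

    def expected_Z_var_from_parts(bq, x_a):
        # E[V(Z) | l_s, l_a] = E[Z | l_s]^2 + V(Z | l_s) - E[E[Z]^2 | l_s, l_a];
        # the first two terms are constant in x_a, so only the expected
        # squared mean depends on the new observation location
        return bq.Z_mean() ** 2 + bq.Z_var() - bq.expected_squared_mean(x_a)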

expected_mean(x_a)[source]

Computes the expected mean of \(Z\) given a new observation \(x_a\).

Parameters :

x_a : numpy.ndarray

vector of points for which to (independently) compute the expected mean

Returns :

out : expected mean for each point in x_a

expected_squared_mean(x_a)[source]

Computes the expected square mean of \(Z\) given a new observation \(x_a\). This is defined as:

\[\mathbb{E}[\mathbb{E}[Z]^2 |\ \ell_s] = \int \mathbb{E}[Z\ |\ \ell_s, \ell_a]^2 \mathcal{N}(\ell_a\ |\ \hat{m}_a, \hat{C}_a)\ \mathrm{d}\ell_a\]
Parameters :

x_a : numpy.ndarray

vector of points for which to (independently) compute the expected squared mean

Returns :

out : expected squared mean for each point in x_a
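
Since the first two terms of the expected-variance decomposition are constant in \(x_a\), maximizing the expected squared mean minimizes the expected variance of \(Z\). A hedged sketch of scoring candidates this way (the helper and its defaults are illustrative):

    import numpy as np

    def choose_next_by_esm(bq, lo=-3.0, hi=3.0, n=101):
        # score candidate locations by expected squared mean and return the
        # best; maximizing E[E[Z]^2] minimizes the expected variance of Z
        x_a = np.linspace(lo, hi, n)
        return x_a[np.argmax(bq.expected_squared_mean(x_a))]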

expected_squared_mean_and_mean(x_a)[source]

Computes the expected squared mean and expected mean of \(Z\) given a new observation \(x_a\).

Parameters :

x_a : numpy.ndarray

vector of points for which to (independently) compute the expected mean

Returns :

out : expected squared mean and expected mean for each point in x_a

fit_hypers(params)[source]
gp_l = None

Gaussian process over exp(log(l))

gp_log_l = None

Gaussian process over log(l)

init(params_tl, params_l)[source]

Initialize the GPs.

Parameters :

params_tl : numpy.ndarray

initial parameters for the GP over \(\log\ell\)

params_l : numpy.ndarray

initial parameters for the GP over \(\exp(\log\ell)\)

l_c = None

Vector of candidate values

l_mean(x)[source]

Mean of the final approximation to \(\ell\).

Parameters :

x : numpy.ndarray

\(m\) array of new sample locations.

Returns :

mean : numpy.ndarray

\(m\) array of predictive means

Notes

This is just the mean of the GP over \(\exp(\log\ell)\), i.e.:

\[\mathbb{E}[\bar{\ell}(\mathbf{x})] = \mathbb{E}_{\mathrm{GP}(\exp(\log\ell))}(\mathbf{x})\]
l_s = None

Vector of observed values

l_sc = None

Vector of observed plus candidate values

l_var(x)[source]

Marginal variance of the final approximation to \(\ell\).

Parameters :

x : numpy.ndarray

\(m\) array of new sample locations.

Returns :

var : numpy.ndarray

\(m\) array of predictive variances

Notes

This is just the diagonal of the covariance of the GP over \(\log\ell\) multiplied by the squared mean of the GP over \(\exp(\log\ell)\), i.e.:

\[\mathbb{V}[\bar{\ell}(\mathbf{x})] = \mathbb{V}_{\mathrm{GP}(\log\ell)}(\mathbf{x})\mathbb{E}_{\mathrm{GP}(\exp(\log\ell))}(\mathbf{x})^2\]
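
This is the first-order (delta-method) approximation to the variance of the exponential of a Gaussian variable: if \(g \sim \mathcal{N}(m, s^2)\), then \(\mathbb{V}[\exp(g)] \approx s^2 \exp(m)^2\) for small \(s\). A quick Monte Carlo check:

    import numpy as np

    rng = np.random.default_rng(0)
    m, s = 0.3, 0.05
    g = rng.normal(m, s, 200_000)
    print(np.var(np.exp(g)))        # Monte Carlo estimate
    print(s ** 2 * np.exp(m) ** 2)  # delta-method approximation
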
load_options(kernel, n_candidate, candidate_thresh, x_mean, x_var, optim_method)[source]

Load options.

Parameters :

kernel : Kernel

The type of kernel to use. Note that if the kernel is not Gaussian, slow approximate (rather than analytic) solutions will be used.

n_candidate : int

The (maximum) number of candidate points.

candidate_thresh : float

Minimum allowed space between candidates.

x_mean : float

Prior mean, \(\mu\).

x_var : float

Prior variance, \(\sigma^2\).

optim_method : string

Method to use for parameter optimization (e.g., 'L-BFGS-B' or 'Powell')
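
Since the class docstring routes **options to the arguments listed here, construction presumably looks something like the following. The kernel import, option values, and hyperparameter arrays are all illustrative assumptions:

    import numpy as np
    from bayesian_quadrature import BQ
    from some_gp_library import GaussianKernel   # hypothetical import

    x = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])    # sample locations
    l = np.exp(-x ** 2 / 2)                      # sample observations

    bq = BQ(x, l,
            kernel=GaussianKernel,    # Gaussian kernel => analytic integrals
            n_candidate=10,
            candidate_thresh=0.5,
            x_mean=0.0,
            x_var=1.0,
            optim_method="L-BFGS-B")
    bq.init(params_tl=np.array([1.0, 1.0, 0.0]),  # illustrative hyperparameters
            params_l=np.array([1.0, 1.0, 0.0]))
    print(bq.Z_mean(), bq.Z_var())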

marginalize(funs, n, params)[source]

Compute the approximate marginals of the functions in funs by approximately marginalizing over the GP hyperparameters.

Parameters :

funs : list

List of functions for which to compute the marginal.

n : int

Number of samples to take when doing the approximate marginalization

params : list or tuple

List of parameters to marginalize over
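
A hedged sketch of one way such a marginalization loop could be assembled from the documented methods (the helper name and parameter names are illustrative):

    import numpy as np

    def marginal_Z_mean(bq, params, n=20):
        # draw hyperparameter samples with the documented sample_hypers
        # method, then average Z_mean over the samples
        samples = []
        for _ in range(n):
            bq.sample_hypers(params, n=1, nburn=10)
            samples.append(bq.Z_mean())
        return np.mean(samples)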

nc = None

Number of candidate points

ns = None

Number of observations

nsc = None

Number of observations plus candidates

plot(f_l=None, xmin=None, xmax=None)[source]
plot_expected_squared_mean(ax, xmin=None, xmax=None)[source]
plot_expected_variance(ax, xmin=None, xmax=None)[source]
plot_gp_l(ax, f_l=None, xmin=None, xmax=None)[source]
plot_gp_log_l(ax, f_l=None, xmin=None, xmax=None)[source]
plot_l(ax, f_l=None, xmin=None, xmax=None, legend=True)[source]
sample_hypers(params, n=1, nburn=10)[source]

Use slice sampling to sample new hyperparameters for the two GPs. Note that this will probably cause \(\bar{\ell}(x_{sc})\) to change.

Parameters :

params : list of strings

List of parameter names to be sampled

n : int

Number of samples to draw

nburn : int

Number of burn-in samples

tl_s = None

Vector of log-transformed observed values

x_c = None

Vector of candidate locations

x_s = None

Vector of observed locations

x_sc = None

Vector of observed plus candidate locations
