Commit: Guidelines with MH example and revised skeletons

albcab committed Feb 21, 2024
Parent 2946c53, commit 6f94b94

Showing 5 changed files with 47 additions and 31 deletions.
4 changes: 4 additions & 0 deletions README.md
@@ -129,6 +129,10 @@
information related to the transition are returned separately. They can thus be
easily composed and exchanged. We specialize these kernels by closure instead of
passing parameters.

+ ### New algorithms
+
+ We hope to make implementing and testing new algorithms easy with BlackJAX. Many basic methods are already implemented in the library, and you can use them to test new algorithms. Follow the [guidelines](https://blackjax-devs.github.io/blackjax/developer/guidelines.html) to implement your own method and test new ideas on existing methods without writing everything from scratch!

## Contributions

Please follow our [short guide](https://github.com/blackjax-devs/blackjax/blob/main/CONTRIBUTING.md).
12 changes: 6 additions & 6 deletions docs/developer/approximate_inf_algorithm.py
@@ -32,7 +32,7 @@


class ApproxInfState(NamedTuple):
"""State of the approximate inference algorithm.
"""State of your approximate inference algorithm.
Give an overview of the variables needed at each step and for sampling.
"""
@@ -41,9 +41,9 @@ class ApproxInfState(NamedTuple):


class ApproxInfInfo(NamedTuple):
"""Additional information on the algorithm transition.
"""Additional information on your algorithm transition.
Given an overview of the collected values at each step of the approximation.
Give an overview of the collected values at each step of the approximation.
"""

...
@@ -63,7 +63,7 @@ def step(
*args,
**kwargs,
) -> Tuple[ApproxInfState, ApproxInfInfo]:
"""Approximate the target density using the some approximation.
"""Approximate the target density using your approximation.
Parameters
----------
@@ -81,14 +81,14 @@


def sample(rng_key: PRNGKey, state: ApproxInfState, num_samples: int = 1):
"""Sample from the approximation."""
"""Sample from your approximation."""
# the sample should be a PyTree of the same structure as the `position` in the init function
samples = ...
return samples


class approx_inf_algorithm:
"""Implements the (basic) user interface for the approximate inference method.
"""Implements the (basic) user interface for your approximate inference method.
Describe in detail the inner mechanism of the method and its use.
36 changes: 19 additions & 17 deletions docs/developer/guidelines.md
@@ -1,41 +1,43 @@
# Developer Guidelines

## Style
- In its broadest sense, an algorithm that belongs in the blackjax library should approximate integrals on a probability space. An introduction to probability theory is outside the scope of this document, but the Monte Carlo method is ever-present and important to understand. In simple terms, we want to approximate an integral with a sum. To do this, generate samples with probabilities defined by a density (continuous variable) or measure (discrete variable) function. The idea is to sample more from areas with higher probability but also from areas with low probability, just at a lower rate. You can also approximate the target density directly, using an approximation that is easier to handle, then do inference, i.e. solve integrals, with the approximation directly and use importance sampling to correct its bias.
+ In the broadest sense, an algorithm that belongs in the BlackJAX library should provide the tools to approximate integrals on a probability space. An introduction to probability theory is outside the scope of this document, but the Monte Carlo method is ever-present and important to understand. In simple terms, we want to approximate an integral with a sum. To do this, generate samples with [relative likelihood](https://en.wikipedia.org/wiki/Relative_likelihood) given by a target probability density function (known up to a normalization constant). The idea is to sample more from areas with higher likelihood but also from areas with low likelihood, just at a lower rate. You can also approximate the target density directly, using a density that is tractable and easy to sample from, then do inference with the approximation instead of the target, potentially using [importance sampling](https://en.wikipedia.org/wiki/Importance_sampling) to correct the approximation error.

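To make this concrete, here is a minimal sketch of both ideas in JAX; the target (standard normal), the proposal (normal with scale 2), and the test function `f(x) = x**2` are arbitrary choices for illustration:

```python
import jax
import jax.numpy as jnp

key_mc, key_is = jax.random.split(jax.random.PRNGKey(0))

# Monte Carlo: approximate E[f(x)] under the target with a sample average.
# Target: standard normal; f(x) = x**2, so the true value is 1.
samples = jax.random.normal(key_mc, (10_000,))
mc_estimate = jnp.mean(samples**2)

# Importance sampling: draw from a tractable proposal q (normal, scale 2)
# and reweight by p(x)/q(x) to correct for sampling from q instead of p.
x = 2.0 * jax.random.normal(key_is, (10_000,))
log_w = jax.scipy.stats.norm.logpdf(x) - jax.scipy.stats.norm.logpdf(x, scale=2.0)
weights = jnp.exp(log_w)
is_estimate = jnp.sum(weights * x**2) / jnp.sum(weights)
```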
- In the following section, we’ll explain blackjax’s design of different algorithms for Monte Carlo integration. Keep in mind some basic principles:
+ In the following section, we’ll explain BlackJAX’s design of different algorithms for Monte Carlo integration. Keep in mind some basic principles:

- Leverage JAX's unique strengths: functional programming and composable function-transformation approach.
- Write small and general functions, compose them to create complex methods, reuse the same building blocks for similar algorithms.
- Consider compatibility with the broader JAX ecosystem (Flax, Optax, GPJax).
- Write code that is easy to read and understand.
- Write code that is well documented, describe in detail the inner mechanism of the algorithm and its use.
+ - Leverage JAX's unique strengths: functional programming and composable function-transformation approach.
+ - Write small and general functions, compose them to create complex methods, reuse the same building blocks for similar algorithms.
+ - Consider compatibility with the broader JAX ecosystem (Flax, Optax, GPJax).
+ - Write code that is easy to read and understand.
+ - Write code that is well documented; describe in detail the inner mechanism of the algorithm and its use.

## Core implementation
- There are three types of sampling algorithms blackjax currently supports: Markov Chain Monte Carlo (MCMC), Sequential Monte Carlo (SMC), and Stochastic Gradient MCMC (SGMCMC); and one type of approximate inference algorithm: Variational Inference (VI). Additionally, blackjax supports adaptation algorithms that efficiently tune the hyperparameters of sampling algorithms, usually aimed at reducing autocorrelation between sequential samples.
+ There are three types of sampling algorithms BlackJAX currently supports: Markov Chain Monte Carlo (MCMC), Sequential Monte Carlo (SMC), and Stochastic Gradient MCMC (SGMCMC); and one type of approximate inference algorithm: Variational Inference (VI). Additionally, BlackJAX supports adaptation algorithms that efficiently tune the hyperparameters of sampling algorithms, usually aimed at reducing autocorrelation between sequential samples.

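Roughly, these families map onto BlackJAX's subpackages (the examples in parentheses are illustrative, not exhaustive):

```python
import blackjax.mcmc        # Markov Chain Monte Carlo (e.g. HMC, NUTS, random walk)
import blackjax.smc         # Sequential Monte Carlo (e.g. tempered SMC)
import blackjax.sgmcmc      # Stochastic Gradient MCMC (e.g. SGLD)
import blackjax.vi          # Variational Inference (e.g. mean-field VI)
import blackjax.adaptation  # hyperparameter tuning (e.g. window adaptation)
```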
- Basic components are functions, which do specific tasks but are generally applicable, used to build all inference algorithms. When implementing a new inference algorithm, you should first break it down to its basic components, then find and use all that are already implemented *before* writing your own. A recurrent example is the Metropolis-Hastings step, a basic component used by many MCMC algorithms to keep the target distribution invariant. In blackjax, this common accept/reject step done with two functions: first the Hastings ratio is calculated by creating a proposal using `mcmc.proposal.proposal_generator`, then the proposal is accepted or rejected using `mcmc.proposal.static_binomial_sampling`.
+ Basic components are functions which do specific tasks but are generally applicable, used to build all inference algorithms. When implementing a new inference algorithm you should first break it down to its basic components, then find and use all that are already implemented *before* writing your own. A recurrent example is the [Metropolis-Hastings](https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm) step, a basic component used by many MCMC algorithms to keep the target distribution invariant. In BlackJAX there are two basic components that implement a specific (but simpler) and a general version of this accept/reject step:

- Because JAX operates on pure functions, inference algorithms always return a NamedTuple containing the necessary variables to generate the next sample. Arguably, abstracting the handling of these variables is the whole point of blackjax, so it must be done in a way that abstracts the uninteresting bookkeeping from the end user but allows her to access important variables at each step. The algorithms should also return a NamedTuple with important information of each iteration.
+ - Metropolis step: if the proposal transition kernel is symmetric, i.e. if the probability of going from the initial to the proposed position is always equal to the probability of going from the proposed to the initial position, the acceptance probability is calculated by creating a proposal using `mcmc.proposal.proposal_generator`, then the proposal is accepted or rejected using `mcmc.proposal.static_binomial_sampling`.
+ - Metropolis-Hastings step: for the more general case of an asymmetric proposal transition kernel, the acceptance probability is calculated by creating a proposal using `mcmc.proposal.asymmetric_proposal_generator`, then the proposal is accepted or rejected using `mcmc.proposal.static_binomial_sampling`.

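For intuition, the accept/reject logic these components encapsulate looks roughly as follows. This is a self-contained sketch of a symmetric random-walk Metropolis step, not BlackJAX's internal implementation, and it assumes the position is a single array:

```python
import jax
import jax.numpy as jnp

def rw_metropolis_step(rng_key, position, logdensity_fn, step_size=0.1):
    """One random-walk Metropolis step with a symmetric Gaussian proposal."""
    key_proposal, key_accept = jax.random.split(rng_key)

    # symmetric proposal: perturb the current position with Gaussian noise
    proposal = position + step_size * jax.random.normal(key_proposal, position.shape)

    # accept with probability min(1, p(proposal) / p(position));
    # with a symmetric proposal the Hastings correction term cancels
    log_p_accept = logdensity_fn(proposal) - logdensity_fn(position)
    do_accept = jnp.log(jax.random.uniform(key_accept)) < log_p_accept
    return jnp.where(do_accept, proposal, position)
```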
+ When implementing an algorithm you could choose to replace the classic, reversible Metropolis-Hastings step with Neal's [non-reversible slice sampling](https://arxiv.org/abs/2001.11950) step by simply replacing `mcmc.proposal.static_binomial_sampling` with `mcmc.proposal.nonreversible_slice_sampling` in either of the previous implementations. Just make sure to carry over to the next iteration an updated slice, instead of passing a pseudo-random number generating key, for the slice sampling step!

+ The previous example illustrates the power of basic components: they are useful not only to avoid rewriting the same methods for each new algorithm, but also to personalize and test new algorithms that replace some steps of common efficient algorithms, like how `blackjax.mcmc.ghmc` is `blackjax.mcmc.hmc` with persistent momentum and a non-reversible slice sampling step instead of the Metropolis-Hastings step.

+ Because JAX operates on pure functions, inference algorithms always return a `typing.NamedTuple` containing the necessary variables to generate the next sample. Arguably, abstracting the handling of these variables is the whole point of BlackJAX, so it must be done in a way that hides the uninteresting bookkeeping from the end user but allows them to access important variables at each step. The algorithms should also return a `typing.NamedTuple` with important information about each iteration.

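For example, a random-walk algorithm might carry state and per-iteration information like this (hypothetical names, shown only to illustrate the pattern):

```python
from typing import Any, NamedTuple

class RWState(NamedTuple):
    """Everything the kernel needs to generate the next sample."""
    position: Any  # a PyTree of arrays
    logdensity: float

class RWInfo(NamedTuple):
    """Diagnostics collected at each iteration."""
    acceptance_rate: float
    is_accepted: bool
    proposal: Any  # the proposed position, whether accepted or not
```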
The user-facing interface of a **sampling algorithm** should work like this:
```python
import blackjax
sampling_algorithm = blackjax.sampling_algorithm(logdensity_fn, *args, **kwargs)
state = sampling_algorithm.init(initial_position)
new_state, info = sampling_algorithm.step(rng_key, state)
```
- Achieve this by building from the basic skeleton of a sampling algorithm (here)[https://github.com/blackjax-devs/blackjax/tree/main/docs/developer/sampling_algorithm.py]. Only the `sampling_algorithm` class and the `init` and `build_kernel` functions need to be in the final version of your algorithm, the rest might become useful but are not necessary.
+ Achieve this by building from the basic skeleton of a sampling algorithm [here](https://github.com/blackjax-devs/blackjax/tree/main/docs/developer/sampling_algorithm.py). Only the `sampling_algorithm` class and the `init` and `build_kernel` functions need to be in the final version of your algorithm; the rest might become useful but are not necessary.

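Once an algorithm exposes this interface, users can run it with a standard `jax.lax.scan` loop; a sketch, assuming the `sampling_algorithm` built above:

```python
import jax

def inference_loop(rng_key, initial_state, num_samples):
    """Apply the kernel num_samples times, collecting states and info."""
    def one_step(state, rng_key):
        new_state, info = sampling_algorithm.step(rng_key, state)
        return new_state, (new_state, info)

    keys = jax.random.split(rng_key, num_samples)
    _, (states, infos) = jax.lax.scan(one_step, initial_state, keys)
    return states, infos
```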
The user-facing interface of an **approximate inference algorithm** should work like this:
```python
import blackjax
approx_inf_algorithm = blackjax.approx_inf_algorithm(logdensity_fn, optimizer, *args, **kwargs)
state = approx_inf_algorithm.init(initial_position)
new_state, info = approx_inf_algorithm.step(rng_key, state)
# user is able to build the approximate distribution using the state, or generate samples:
position_samples = approx_inf_algorithm.sample(rng_key, state, num_samples)
```
- Achieve this by building from the basic skeleton of an approximate inference algorithm (here)[https://github.com/blackjax-devs/blackjax/tree/main/docs/developer/approximate_inf_algorithm.py]. Only the `approx_inf_algorithm` class and the `init`, `step` and `sample` functions need to be in the final version of your algorithm, the rest might become useful but are not necessary.
-
- Well documented code is essential for a useful library. Start by decomposing your algorithm into basic components, finding those that are already implemented, then implement your own and build the high-level API from basic components.
+ Achieve this by building from the basic skeleton of an approximate inference algorithm [here](https://github.com/blackjax-devs/blackjax/tree/main/docs/developer/approximate_inf_algorithm.py). Only the `approx_inf_algorithm` class and the `init`, `step` and `sample` functions need to be in the final version of your algorithm; the rest might become useful but are not necessary.
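The same looping pattern applies: iterate `step` until the approximation converges, then draw from it. A sketch, assuming the `approx_inf_algorithm` built above:

```python
import jax

def approximation_loop(rng_key, initial_state, num_steps, num_samples):
    """Optimize the approximation, then sample from the final state."""
    def one_step(state, rng_key):
        new_state, info = approx_inf_algorithm.step(rng_key, state)
        return new_state, info

    step_key, sample_key = jax.random.split(rng_key)
    keys = jax.random.split(step_key, num_steps)
    last_state, infos = jax.lax.scan(one_step, initial_state, keys)
    return approx_inf_algorithm.sample(sample_key, last_state, num_samples)
```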
24 changes: 17 additions & 7 deletions docs/developer/sampling_algorithm.py
@@ -32,7 +32,7 @@


class SamplingAlgoState(NamedTuple):
"""State of the sampling algorithm.
"""State of your sampling algorithm.
Give an overview of the variables needed at each iteration of the model.
"""
@@ -41,7 +41,7 @@ class SamplingAlgoState(NamedTuple):


class SamplingAlgoInfo(NamedTuple):
"""Additional information on the algorithm transition.
"""Additional information on your algorithm transition.
Given an overview of the collected values at each iteration of the model.
"""
@@ -56,7 +56,7 @@ def init(position: PyTree, logdensity_fn: Callable, *args, **kwargs):


def build_kernel(*args, **kwargs):
"""Build a HMC kernel.
"""Build a your kernel.
Parameters
----------
@@ -92,7 +92,7 @@ def kernel(


class sampling_algorithm:
"""Implements the (basic) user interface for the sampling kernel.
"""Implements the (basic) user interface for your sampling kernel.
Describe in detail the inner mechanism of the algorithm and its use.
@@ -148,10 +148,20 @@ def sampling_algorithm_proposal(*args, **kwags) -> Callable:
-------
Describe what is returned.
"""
-    ...
+    # as an example, a Metropolis-Hastings step would look like this:
+    init_proposal, generate_proposal = proposal.proposal_generator(...)
+    sample_proposal = proposal.static_binomial_sampling(...)

-    def generate(*args, **kwargs):
-        """Generate a new chain state."""
+    def generate(rng_key, state):
+        # propose a new sample
+        proposal_state = ...
+
+        # accept or reject the proposed sample
+        proposal = init_proposal(state)
+        new_proposal, is_diverging = generate_proposal(proposal.energy, proposal_state)
+        sampled_proposal, *info = sample_proposal(rng_key, proposal, new_proposal)
+
        # build a new state and collect useful information
        sampled_state, info = ...

        return sampled_state, info
2 changes: 1 addition & 1 deletion docs/index.md
@@ -141,5 +141,5 @@ maxdepth: 1
caption: DEVELOPER DOCUMENTATION
hidden:
---
- Guidelines<developer/principles.md>
+ Guidelines<developer/guidelines.md>
```
