
Commit

update tutorials
michaeldeistler committed Nov 3, 2023
1 parent 24c3510 commit bfe2d2c
Showing 8 changed files with 150 additions and 498 deletions.
12 changes: 10 additions & 2 deletions docs/docs/index.md
@@ -4,13 +4,21 @@

![using sbi](static/infer_demo.gif)

Inference can be run in a single line of code:
Inference can be run in a single line of code

```python
posterior = infer(simulator, prior, method='SNPE', num_simulations=1000)
```

and you can choose from a variety of _amortized_ and _sequential_ SBI methods.
or in a few lines for more flexibility:

```python
inference = SNPE(prior=prior)
_ = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()
```

`sbi` lets you choose from a variety of _amortized_ and _sequential_ SBI methods:

Amortized methods return a posterior that can be applied to many different observations without retraining,
whereas sequential methods focus the inference on one particular observation to be more simulation-efficient.
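
As a minimal sketch of what amortization buys you (assuming a trained `posterior` and two hypothetical observations `x_o_1` and `x_o_2`; this snippet is an illustration, not part of the diff):

```python
# One trained posterior serves many observations -- no retraining needed.
samples_1 = posterior.sample((1000,), x=x_o_1)
samples_2 = posterior.sample((1000,), x=x_o_2)
```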
7 changes: 3 additions & 4 deletions docs/mkdocs.yml
@@ -7,18 +7,17 @@ nav:
- Tutorials and Examples:
- Introduction:
- Getting started: tutorial/00_getting_started.md
- Amortized inference: tutorial/01_gaussian_amortized.md
- Flexible interface: tutorial/02_flexible_interface.md
- Sampler interface: tutorial/11_sampler_interface.md
- Amortized inference: tutorial/01_gaussian_amortized.md
- Implemented algorithms: tutorial/16_implemented_methods.md
- Advanced:
- Multi-round inference: tutorial/03_multiround_inference.md
- Using Variational Inference for Building Posteriors: tutorial/17_vi_posteriors.md
- Sampling algorithms in sbi: tutorial/11_sampler_interface.md
- Custom density estimators: tutorial/04_density_estimators.md
- Learning summary statistics: tutorial/05_embedding_net.md
- SBI with trial-based data: tutorial/14_iid_data_and_permutation_invariant_embeddings.md
- Handling invalid simulations: tutorial/08_restriction_estimator.md
- Crafting summary statistics: tutorial/10_crafting_summary_statistics.md
- SBI with trial-based data: tutorial/14_iid_data_and_permutation_invariant_embeddings.md
- Diagnostics:
- Posterior predictive checks: tutorial/12_diagnostics_posterior_predictive_check.md
- Simulation-based calibration: tutorial/13_diagnostics_simulation_based_calibration.md
75 changes: 2 additions & 73 deletions tutorials/00_getting_started.ipynb
@@ -56,7 +56,6 @@
"num_dim = 3\n",
"prior = utils.BoxUniform(low=-2 * torch.ones(num_dim), high=2 * torch.ones(num_dim))\n",
"\n",
"\n",
"def simulator(parameter_set):\n",
" return 1.0 + parameter_set + torch.randn(parameter_set.shape) * 0.1"
]
@@ -96,6 +95,7 @@
}
],
"source": [
"# Other methods are \"SNLE\" or \"SNRE\".\n",
"posterior = infer(simulator, prior, method=\"SNPE\", num_simulations=1000)"
]
},
@@ -166,78 +166,7 @@
"source": [
"## Next steps\n",
"\n",
"The single-line interface described above provides an easy entry for using `sbi`. However, if you are working on a larger project or need additional features, we strongly recommend using the [flexible interface](https://www.mackelab.org/sbi/tutorial/02_flexible_interface/)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Requirements for the simulator, prior, and observation\n",
"\n",
"In the interface described above, you need to provide a prior and a simulator for training. Let's talk about what requirements they need to satisfy."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"### Prior\n",
"A prior is a distribution object that allows to sample parameter sets. Any class for the prior is allowed as long as it allows to call `prior.sample()` and `prior.log_prob()`."
]
},
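
As a minimal sketch of these requirements, any `torch.distributions` object already provides both calls (an illustrative example, assuming a 3-dimensional parameter space):

```python
import torch

# Any object exposing .sample() and .log_prob() qualifies as a prior.
prior = torch.distributions.MultivariateNormal(torch.zeros(3), torch.eye(3))
theta = prior.sample((10,))    # 10 parameter sets, shape (10, 3)
log_p = prior.log_prob(theta)  # log-density of each set, shape (10,)
```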
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simulator\n",
"The simulator is a Python callable that takes in a parameter set and outputs data with some (even if very small) stochasticity.\n",
"\n",
"Allowed data types and shapes for input and output:\n",
"\n",
"- the input parameter set and the output have to be either a `np.ndarray` or a `torch.Tensor`. \n",
"- the input parameter set should have either shape `(1,N)` or `(N)`, and the output must have shape `(1,M)` or `(M)`.\n",
"\n",
"You can call simulators not written in Python as long as you wrap them in a Python function."
]
},
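
A minimal simulator satisfying these shape rules might look as follows (an illustrative sketch):

```python
import torch

def simulator(parameter_set):
    # Accepts shape (N,) or (1, N) and returns an output of the same shape,
    # with a little additive noise so the simulator is stochastic.
    return parameter_set + 0.1 * torch.randn(parameter_set.shape)
```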
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Observation\n",
"Once you have a trained posterior, you will want to evaluate or sample the posterior $p(\\theta|x_o)$ at certain observed values $x_o$:\n",
"\n",
"- The allowable data types are either Numpy `np.ndarray` or a torch `torch.Tensor`.\n",
"- The shape must be either `(1,M)` or just `(M)`."
]
},
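
For example, for a simulator with `M = 3` outputs, both of the following are valid observations (illustrative sketch):

```python
import torch

x_o = torch.zeros(3)     # shape (M,)
x_o = torch.zeros(1, 3)  # shape (1, M)
```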
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Running different algorithms\n",
"\n",
"`sbi` implements three classes of algorithms that can be used to obtain the posterior distribution: SNPE, SNLE, and SNRE. You can try the different algorithms by simply swapping out the `method`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"posterior = infer(simulator, prior, method=\"SNPE\", num_simulations=1000)\n",
"posterior = infer(simulator, prior, method=\"SNLE\", num_simulations=1000)\n",
"posterior = infer(simulator, prior, method=\"SNRE\", num_simulations=1000)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can then infer, sample, evaluate, and plot the posterior as described above."
"The single-line interface described above provides an easy entry for using `sbi`. However, on almost any real-world problem that goes beyond a simple demonstration, we strongly recommend using the [flexible interface](https://www.mackelab.org/sbi/tutorial/02_flexible_interface/)."
]
}
],
36 changes: 5 additions & 31 deletions tutorials/01_gaussian_amortized.ipynb
@@ -74,36 +74,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We can then run inference:"
"We can then run inference (either with the simple interface of with the flexible interface):"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "574bca59922f4e68a7dcf8f5afe3b372",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Running 1000 simulations.: 0%| | 0/1000 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" Neural network successfully converged after 97 epochs."
]
}
],
"outputs": [],
"source": [
"posterior = infer(linear_gaussian, prior, \"SNPE\", num_simulations=1000)"
]
@@ -129,12 +107,8 @@
"metadata": {},
"outputs": [],
"source": [
"x_o_1 = torch.zeros(\n",
" 3,\n",
")\n",
"x_o_2 = 2.0 * torch.ones(\n",
" 3,\n",
")"
"x_o_1 = torch.zeros(3,)\n",
"x_o_2 = 2.0 * torch.ones(3,)"
]
},
{
4 changes: 1 addition & 3 deletions tutorials/02_flexible_interface.ipynb
@@ -244,9 +244,7 @@
"metadata": {},
"outputs": [],
"source": [
"x_o = torch.zeros(\n",
" 3,\n",
")"
"x_o = torch.zeros(3,)"
]
},
{
116 changes: 111 additions & 5 deletions tutorials/11_sampler_interface.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# The sampler interface\n",
"# Sampling algorithms in `sbi`\n",
"\n",
"Note: this tutorial requires that the user is already familiar with the [flexible interface](https://sbi-dev.github.io/sbi/tutorial/02_flexible_interface/).\n",
"\n",
@@ -16,14 +16,75 @@
"\n",
"- Variational inference (VI)\n",
"\n",
"When using the flexible interface, the sampler as well as its attributes can be set with `sample_with=\"mcmc\"`, `mcmc_method=\"slice_np\"`, and `mcmc_parameters={}`. However, for full flexibility in customizing the sampler, we recommend using the **sampler interface**. This interface is described here. Further details can be found [here](https://github.com/sbi-dev/sbi/pull/573)."
"Below, we will demonstrate how these samplers can be used in `sbi`. First, we train the neural network as always:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from sbi.inference import SNLE\n",
"\n",
"# dummy Gaussian simulator for demonstration\n",
"num_dim = 2\n",
"prior = torch.distributions.MultivariateNormal(torch.zeros(num_dim), torch.eye(num_dim))\n",
"theta = prior.sample((1000,))\n",
"x = theta + torch.randn((1000, num_dim))\n",
"x_o = torch.randn((1, num_dim))\n",
"\n",
"inference = SNLE(prior=prior, show_progress_bars=False)\n",
"likelihood_estimator = inference.append_simulations(theta, x).train()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Main syntax for SNLE"
"And then we pass the options for which sampling method to use to the `build_posterior()` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sampling with MCMC\n",
"sampling_algorithm = \"mcmc\"\n",
"mcmc_method = \"slice_np\" # or nuts, or hmc\n",
"posterior = inference.build_posterior(sample_with=sampling_algorithm, mcmc_method=mcmc_method)\n",
"\n",
"# Sampling with variational inference\n",
"sampling_algorithm = \"vi\"\n",
"vi_method = \"rKL\" # or fKL\n",
"posterior = inference.build_posterior(sample_with=sampling_algorithm, vi_method=vi_method)\n",
"# Unlike other methods, vi needs a training step for every observation.\n",
"posterior = posterior.set_default_x(x_o).train()\n",
"\n",
"# Sampling with rejection sampling\n",
"sampling_algorithm = \"rejection\"\n",
"posterior = inference.build_posterior(sample_with=sampling_algorithm)"
]
},
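
Whichever sampler was configured, drawing posterior samples then uses the same call (a sketch; for `"vi"`, remember the extra `train()` step shown above):

```python
samples = posterior.sample((1000,), x=x_o)
```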
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# More flexibility in adjusting the sampler\n",
"\n",
"With the above syntax, you can easily try out different sampling algorithms. However, in many cases, you might want to customize your sampler. Below, we demonstrate how you can change hyperparameters of the samplers (e.g. number of warm-up steps of MCMC) or how you can write your own sampler from scratch."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Main syntax (for SNLE and SNRE)\n",
"\n",
"As above, we begin by training the neural network as always:"
]
},
{
@@ -43,7 +104,6 @@
"import torch\n",
"\n",
"from sbi.inference import SNLE\n",
"from sbi.inference import likelihood_estimator_based_potential, MCMCPosterior\n",
"\n",
"# dummy Gaussian simulator for demonstration\n",
"num_dim = 2\n",
@@ -53,16 +113,62 @@
"x_o = torch.randn((1, num_dim))\n",
"\n",
"inference = SNLE(show_progress_bars=False)\n",
"likelihood_estimator = inference.append_simulations(theta, x).train()\n",
"likelihood_estimator = inference.append_simulations(theta, x).train()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, for full flexibility on using the sampler, we do not use the `.build_posterior()` method, but instead we explicitly define the potential function and the sampling algorithm (see below for explanation):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sbi.inference import likelihood_estimator_based_potential, MCMCPosterior\n",
"\n",
"potential_fn, parameter_transform = likelihood_estimator_based_potential(\n",
" likelihood_estimator, prior, x_o\n",
")\n",
"posterior = MCMCPosterior(\n",
" potential_fn, proposal=prior, theta_transform=parameter_transform, warmup_steps=10\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to use variational inference or rejection sampling, you have to replace the last line with `VIPosterior` or `RejectionPosterior`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# For VI, we have to train.\n",
"posterior = VIPosterior(\n",
" potential_fn, proposal=prior, theta_transform=parameter_transform\n",
").train()\n",
"\n",
"posterior = RejectionPosterior(\n",
" potential_fn, proposal=prior, theta_transform=parameter_transform\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At this point, you could also plug the `potential_fn` into any sampler of your choice and not rely on any of the in-built `sbi`-samplers."
]
},
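
As a sketch of that last point, here is a toy random-walk Metropolis loop driven only by `potential_fn` (an illustration assuming `potential_fn` returns the unnormalized log-posterior for a batch of parameters; this is not an `sbi` API):

```python
import torch

theta = prior.sample((1,))
samples = []
for _ in range(1000):
    candidate = theta + 0.1 * torch.randn_like(theta)
    # Metropolis acceptance probability in log-space.
    log_accept_prob = potential_fn(candidate) - potential_fn(theta)
    if torch.log(torch.rand(1)) < log_accept_prob:
        theta = candidate
    samples.append(theta.clone())
samples = torch.cat(samples)  # shape (1000, num_dim)
```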
{
"cell_type": "markdown",
"metadata": {},