Better support for prior sensitivity checks #883

Open
aloctavodia opened this issue Jan 29, 2025 · 0 comments
@aloctavodia (Collaborator)

In this paper, Kallioinen et al. propose a method to evaluate the sensitivity of the posterior distribution to the prior distribution and an associated R package priorsense.

Here you will find a brief description of the method, but the paper is very accessible, and I recommend reading it. The basic idea is to power-scale the priors (and/or likelihood) and evaluate the sensitivity of the posterior distribution to these changes. This can be done for all parameters simultaneously or for a subset.

On the Python side, some of this functionality is already available in old ArviZ, but ArviZ 1.0 implements a more complete version of the method and will soon be on par with the R package. See:

https://arviz-stats.readthedocs.io/en/latest/api/generated/arviz_stats.psense.html
https://arviz-stats.readthedocs.io/en/latest/api/generated/arviz_stats.psense_summary.html
https://arviz-plots.readthedocs.io/en/latest/gallery/plot_psense.html

As usual with Bambi, we can interact with ArviZ simply by passing the InferenceData to the relevant ArviZ function, and the functions above are no exception (see the sketch below). However, we may want to offer tighter integration in the future. Why? Because for some models it makes sense to power-scale all parameters simultaneously, but not for all of them, and Bambi has enough information about the model to automate that choice.
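For reference, the workflow today (without any extra Bambi integration) looks roughly like the sketch below. It assumes the fitted InferenceData carries the pointwise log-likelihood and log-prior information that power-scaling needs, and that the psense functions take the InferenceData as their first argument, as the docs linked above suggest; the simulated data and variable names are just for illustration.

```python
import numpy as np
import pandas as pd
import bambi as bmb
import arviz_stats as azs  # functions documented at the arviz-stats links above

# Simulate a small dataset so the example is self-contained.
rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.normal(size=100)})
df["y"] = 1.5 + 0.7 * df["x"] + rng.normal(scale=0.5, size=100)

# Fit with Bambi as usual; the result is a plain InferenceData object.
model = bmb.Model("y ~ x", df)
idata = model.fit()

# NOTE: the power-scaling diagnostics need the pointwise log-likelihood and
# log-prior stored in idata; how those groups get added depends on the
# PyMC/Bambi version being used.

# Pass the InferenceData straight to the arviz-stats functions, exactly as
# with any other ArviZ-style function.
print(azs.psense_summary(idata))  # tabular sensitivity diagnostic
sens = azs.psense(idata)          # per-parameter sensitivity values
```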

Let me explain. For certain models, some priors should not be power-scaled because they are not directly interpretable; think, for instance, of the coefficients of a spline basis. For others, we should power-scale selectively. For example, in hierarchical models we only want to power-scale the top-level parameter, because power-scaling both the top- and intermediate-level priors would lead to "double" power-scaling. To illustrate this, consider two forms of prior: a non-hierarchical prior with two independent parameters, p(θ) p(φ), and a hierarchical prior of the form p(θ | φ) p(φ). In the first case, the appropriate power-scaling of the prior is p(θ)^α p(φ)^α, while in the second, only the top-level prior should be power-scaled, that is, p(θ | φ) p(φ)^α.
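In other words (a sketch using the notation above, with y the observed data), the power-scaled posteriors in the two cases are:

```math
\begin{aligned}
\text{independent priors:} \quad & p_{\alpha}(\theta, \varphi \mid y) \propto p(y \mid \theta, \varphi)\, p(\theta)^{\alpha}\, p(\varphi)^{\alpha} \\
\text{hierarchical prior:} \quad & p_{\alpha}(\theta, \varphi \mid y) \propto p(y \mid \theta)\, p(\theta \mid \varphi)\, p(\varphi)^{\alpha}
\end{aligned}
```

So in the hierarchical case only the hyperprior p(φ) is perturbed, and the conditional prior p(θ | φ) is left untouched.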
