
Commit

build based on 8015d6e
Documenter.jl committed Aug 23, 2023
1 parent e201b20 commit 56b4b50
Showing 13 changed files with 142 additions and 43 deletions.
20 changes: 10 additions & 10 deletions dev/API/architectures/index.html

Large diffs are not rendered by default.

18 changes: 9 additions & 9 deletions dev/API/core/index.html

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion dev/API/index.html

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions dev/API/loss/index.html

Large diffs are not rendered by default.

10 changes: 5 additions & 5 deletions dev/API/simulation/index.html

Large diffs are not rendered by default.

14 changes: 7 additions & 7 deletions dev/API/utility/index.html

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion dev/framework/index.html
Original file line number Diff line number Diff line change
@@ -4,4 +4,4 @@
\equiv
\underset{\boldsymbol{\gamma}}{\mathrm{arg\,min}} \; r_{\Omega}(\hat{\boldsymbol{\theta}}(\cdot; \boldsymbol{\gamma})).\]</p><p>Typically, <span>$r_{\Omega}(\cdot)$</span> cannot be directly evaluated, but it can be approximated using Monte Carlo methods. Specifically, given a set of <span>$K$</span> parameter vectors sampled from the prior <span>$\Omega(\cdot)$</span> denoted by <span>$\vartheta$</span> and, for each <span>$\boldsymbol{\theta} \in \vartheta$</span>, <span>$J$</span> realisations from <span>$f(\boldsymbol{z} \mid \boldsymbol{\theta})$</span> collected in <span>$\mathcal{Z}_{\boldsymbol{\theta}}$</span>,</p><p class="math-container">\[ r_{\Omega}(\hat{\boldsymbol{\theta}}(\cdot; \boldsymbol{\gamma}))
\approx
\frac{1}{K} \sum_{\boldsymbol{\theta} \in \vartheta} \frac{1}{J} \sum_{\boldsymbol{z} \in \mathcal{Z}_{\boldsymbol{\theta}}} L(\boldsymbol{\theta}, \hat{\boldsymbol{\theta}}(\boldsymbol{z}; \boldsymbol{\gamma})). \]</p><p>Note that the above approximation does not involve evaluation, or knowledge, of the likelihood function.</p><p>The Monte-Carlo-approximated Bayes risk can be straightforwardly minimised with respect to <span>$\boldsymbol{\gamma}$</span> using back-propagation and stochastic gradient descent. For sufficiently flexible architectures, the point estimator targets a Bayes estimator with respect to <span>$L(\cdot, \cdot)$</span> and <span>$\Omega(\cdot)$</span>. We therefore call the fitted neural point estimator a <em>neural Bayes estimator</em>. Like Bayes estimators, neural Bayes estimators target a specific point summary of the posterior distribution. For instance, the absolute-error and squared-error loss functions lead to neural Bayes estimators that approximate the posterior median and mean, respectively.</p><h3 id="Construction-of-neural-Bayes-estimators"><a class="docs-heading-anchor" href="#Construction-of-neural-Bayes-estimators">Construction of neural Bayes estimators</a><a id="Construction-of-neural-Bayes-estimators-1"></a><a class="docs-heading-anchor-permalink" href="#Construction-of-neural-Bayes-estimators" title="Permalink"></a></h3><p>The neural Bayes estimator is conceptually simple and can be used in a wide range of problems where other approaches, such as maximum-likelihood estimation, are computationally infeasible. The estimator also has marked practical appeal, as the general workflow for its construction is only loosely connected to the statistical or physical model being considered.
The workflow is as follows:</p><ol><li>Define the prior, <span>$\Omega(\cdot)$</span>.</li><li>Choose a loss function, <span>$L(\cdot, \cdot)$</span>, typically the absolute-error or squared-error loss.</li><li>Design a suitable neural-network architecture for the neural point estimator <span>$\hat{\boldsymbol{\theta}}(\cdot; \boldsymbol{\gamma})$</span>.</li><li>Sample parameters from <span>$\Omega(\cdot)$</span> to form training/validation/test parameter sets.</li><li>Given the above parameter sets, simulate data from the model to form training/validation/test data sets.</li><li>Train the neural network (i.e., estimate <span>$\boldsymbol{\gamma}$</span>) by minimising the loss function averaged over the training sets. During training, monitor performance and convergence using the validation sets.</li><li>Assess the fitted neural Bayes estimator, <span>$\hat{\boldsymbol{\theta}}(\cdot; \boldsymbol{\gamma}^*)$</span>, using the test set.</li></ol></article><nav class="docs-footer"><a class="docs-footer-prevpage" href="../">« NeuralEstimators</a><a class="docs-footer-nextpage" href="../workflow/overview/">Overview »</a><div class="flexbox-break"></div><p class="footer-message">Powered by <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> and the <a href="https://julialang.org/">Julia Programming Language</a>.</p></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> version 0.27.25 on <span 
class="colophon-date" title="Wednesday 23 August 2023 02:22">Wednesday 23 August 2023</span>. Using Julia version 1.7.3.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
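The Monte Carlo approximation of the Bayes risk described in the page above is model-agnostic and easy to sketch. The following is an illustrative Python sketch, not NeuralEstimators.jl code; the toy setup (data consisting of 10 Gaussian replicates with unknown mean theta, a standard-normal prior, squared-error loss, and the sample mean standing in for the neural point estimator) is an assumption made purely for the example.

```python
import random
import statistics

def monte_carlo_bayes_risk(estimator, loss, sample_prior, simulate, K=1000, J=10):
    """Approximate r_Omega(estimator): average the loss over K draws theta
    from the prior and, for each, J datasets z simulated from the model."""
    total = 0.0
    for _ in range(K):            # theta in vartheta, sampled from Omega
        theta = sample_prior()
        for _ in range(J):        # z in Z_theta, simulated from f(z | theta)
            z = simulate(theta)
            total += loss(theta, estimator(z))
    return total / (K * J)

random.seed(0)
risk = monte_carlo_bayes_risk(
    estimator=statistics.mean,                    # toy stand-in for theta_hat(.; gamma)
    loss=lambda t, t_hat: (t - t_hat) ** 2,       # squared-error loss
    sample_prior=lambda: random.gauss(0.0, 1.0),  # Omega: standard-normal prior
    simulate=lambda theta: [random.gauss(theta, 1.0) for _ in range(10)],
)
print(round(risk, 3))  # should be close to 0.1 (= sigma^2 / m for this toy model)
```

Note that, exactly as in the displayed approximation, the sketch evaluates only the loss on simulated data; the likelihood never appears.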
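The seven-step workflow can likewise be sketched end-to-end. Below is a hypothetical Python illustration (again, not the package's Julia API): the "architecture" is a single-parameter estimator theta_hat(z; gamma) = gamma * mean(z), trained by stochastic gradient descent under squared-error loss. For this toy model the Bayes estimator is the posterior mean, (m/(m+1)) * mean(z), so gamma should approach 10/11 ≈ 0.909.

```python
import random

random.seed(1)
# Steps 1-2 (assumed for illustration): prior Omega = N(0, 1); squared-error
# loss, so the estimator targets the posterior mean.
m, gamma, lr = 10, 0.0, 0.02   # replicates per dataset; initial weight; learning rate

def simulate(theta, m):
    """Step 5: simulate a dataset z from the model f(z | theta)."""
    return [random.gauss(theta, 1.0) for _ in range(m)]

for _ in range(10000):                        # step 6: fit gamma by SGD
    theta = random.gauss(0.0, 1.0)            # step 4: sample theta from the prior
    zbar = sum(simulate(theta, m)) / m
    theta_hat = gamma * zbar                  # step 3: the (toy) point estimator
    # Gradient of the loss (theta - gamma * zbar)^2 with respect to gamma:
    gamma -= lr * (-2.0 * zbar * (theta - theta_hat))

print(round(gamma, 2))  # should settle near m / (m + 1) ≈ 0.91
```

A real application would replace the one-weight estimator with a neural network and the hand-written gradient with back-propagation, and would assess the fitted estimator on a held-out test set (step 7), but the structure of the loop is the same.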
2 changes: 1 addition & 1 deletion dev/index.html
@@ -5,4 +5,4 @@
journal = {The American Statistician},
year = {2023},
volume = {to appear}
}</code></pre></article><nav class="docs-footer"><a class="docs-footer-nextpage" href="framework/">Theoretical framework »</a><div class="flexbox-break"></div><p class="footer-message">Powered by <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> and the <a href="https://julialang.org/">Julia Programming Language</a>.</p></nav></div><div class="modal" id="documenter-settings"><div class="modal-background"></div><div class="modal-card"><header class="modal-card-head"><p class="modal-card-title">Settings</p><button class="delete"></button></header><section class="modal-card-body"><p><label class="label">Theme</label><div class="select"><select id="documenter-themepicker"><option value="documenter-light">documenter-light</option><option value="documenter-dark">documenter-dark</option></select></div></p><hr/><p>This document was generated with <a href="https://github.com/JuliaDocs/Documenter.jl">Documenter.jl</a> version 0.27.25 on <span class="colophon-date" title="Wednesday 23 August 2023 02:22">Wednesday 23 August 2023</span>. Using Julia version 1.7.3.</p></section><footer class="modal-card-foot"></footer></div></div></div></body></html>
