This repository has been archived by the owner on Dec 3, 2024. It is now read-only.
Bayesian-Torch 0.4.0
This release introduces a quantization framework for Bayesian neural networks in Bayesian-Torch. It adds support for post-training quantization of Bayesian deep neural networks, enabling optimized and efficient INT8 inference on Intel platforms.
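Post-training quantization generally follows a prepare → calibrate → convert flow. As a rough illustration of that flow (using standard PyTorch eager-mode quantization on an ordinary model, not Bayesian-Torch's own quantization API, whose exact entry points are not shown in these notes):

```python
# Sketch of the post-training quantization flow: prepare -> calibrate -> convert.
# Uses stock PyTorch eager-mode quantization; the Bayesian-Torch framework
# introduced in this release follows the same general pattern for Bayesian layers.
import torch
import torch.nn as nn
from torch.ao.quantization import (
    get_default_qconfig, prepare, convert, QuantStub, DeQuantStub,
)

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # marks where tensors are quantized
        self.fc = nn.Linear(4, 2)
        self.dequant = DeQuantStub()  # marks where tensors are dequantized

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = SmallNet().eval()
model.qconfig = get_default_qconfig("fbgemm")  # INT8 config for x86 backends
prepare(model, inplace=True)                   # insert observers

# Calibration: run representative data through the model so the observers
# can collect activation ranges for computing quantization parameters.
with torch.no_grad():
    for _ in range(8):
        model(torch.randn(16, 4))

convert(model, inplace=True)                   # swap in INT8 quantized modules
print(type(model.fc).__module__)               # now a quantized Linear module
```

After `convert`, the linear layer's weights are stored as INT8, which is what enables the efficient inference path on Intel hardware.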
What's Changed
- Add support for Bayesian neural network quantization | PR: #23
- Include an example for performing post-training quantization of Bayesian neural network models | commit c3e9a0f
- Add support for output padding in flipout layers | PR: #20
Contributors
- @junliang-lin made their first contribution in #20
- @ranganathkrishnan
Full Changelog: v0.3.0...v0.4.0