This repository has been archived by the owner on Dec 3, 2024. It is now read-only.

Releases: IntelLabs/bayesian-torch

Bayesian-Torch 0.5.0

03 Jan 00:08

This release adds quantization support for all of the Bayesian convolutional layers listed below, in addition to the previously supported Conv2dReparameterization and Conv2dFlipout:

  • Conv1dReparameterization
  • Conv3dReparameterization
  • ConvTranspose1dReparameterization
  • ConvTranspose2dReparameterization
  • ConvTranspose3dReparameterization
  • Conv1dFlipout
  • Conv3dFlipout
  • ConvTranspose1dFlipout
  • ConvTranspose2dFlipout
  • ConvTranspose3dFlipout
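As a rough illustration of what quantizing a Bayesian layer's weights involves, the sketch below samples weights via the reparameterization trick (w = mu + softplus(rho) * eps) and round-trips them through symmetric per-tensor INT8 quantization. All function names are hypothetical; this is a conceptual sketch, not the library's implementation.

```python
import math
import random

def sample_weight(mu, rho, eps=None):
    """Draw one weight via the reparameterization trick:
    w = mu + softplus(rho) * eps, with eps ~ N(0, 1)."""
    sigma = math.log1p(math.exp(rho))  # softplus keeps sigma positive
    if eps is None:
        eps = random.gauss(0.0, 1.0)
    return mu + sigma * eps

def quantize_int8(values):
    """Symmetric per-tensor INT8 quantization: scale = max|v| / 127."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

# Sample a tiny "kernel" of Bayesian weights and round-trip it through INT8.
random.seed(0)
weights = [sample_weight(mu=0.1 * i, rho=-3.0) for i in range(-4, 5)]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
```

The round-trip error is bounded by half a quantization step (scale / 2), which is why INT8 inference can track the FP32 model closely when the weight range is calibrated well.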

This release also includes fixes for the following issues:
  • Issue #27
  • Issue #21
  • Issue #24
  • Issue #34

What's Changed

  • Add quant prepare functions 342ca39
  • Fix bug in post-training quantization evaluation due to Jit trace f5c7126
  • Add quantization example for ImageNet/ResNet-50 3e74914
  • Correct the order of the group and dilation parameters in ConvTranspose layers 97ba16a

Full Changelog: v0.4.0...v0.5.0

Bayesian-Torch 0.4.0

27 Jul 14:16

This release introduces a quantization framework for Bayesian neural networks in Bayesian-Torch.
It supports post-training quantization of Bayesian deep neural networks, enabling optimized and efficient INT8 inference on Intel platforms.
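Post-training quantization typically works by running calibration data through the model, recording activation ranges, and deriving per-tensor scale/zero-point parameters from them. The observer below is a stdlib-only sketch of that idea under assumed names, not the framework added in this release.

```python
class MinMaxObserver:
    """Records the running min/max of activations during calibration,
    then derives asymmetric 8-bit (0..255) quantization parameters."""
    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")

    def observe(self, batch):
        self.lo = min(self.lo, min(batch))
        self.hi = max(self.hi, max(batch))

    def qparams(self):
        lo, hi = min(self.lo, 0.0), max(self.hi, 0.0)  # range must cover 0
        scale = (hi - lo) / 255.0 or 1.0
        zero_point = round(-lo / scale)
        return scale, zero_point

obs = MinMaxObserver()
for batch in ([0.2, 1.5, -0.3], [0.9, 2.0, -0.1]):  # calibration batches
    obs.observe(batch)
scale, zp = obs.qparams()
# Quantize some activations with the calibrated parameters.
q = [max(0, min(255, round(x / scale) + zp)) for x in [2.0, -0.3, 0.0]]
```

Forcing the range to cover zero guarantees that a real 0.0 maps exactly to the zero point, which matters for zero-padding in convolutions.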

What's Changed

  • Add support for Bayesian neural network quantization | PR: #23
  • Include example for performing post training quantization of Bayesian neural network models | commit c3e9a0f
  • Add support for output padding in flipout layers | PR: #20

Full Changelog: v0.3.0...v0.4.0

Bayesian-Torch 0.3.0

14 Dec 05:13

Support arbitrary kernel sizes in the Bayesian convolutional layers.
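Supporting arbitrary (including non-square) kernel sizes mainly means the output-shape arithmetic must hold independently per spatial dimension. A minimal sketch of the standard formula, with hypothetical names:

```python
def conv_out_size(in_size, kernel, stride=1, padding=0, dilation=1):
    """Standard convolution output-length formula, applied per spatial dim."""
    return (in_size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# A non-square (3, 5) kernel on a 32x32 input, "same"-style padding per dim:
h = conv_out_size(32, kernel=3, padding=1)
w = conv_out_size(32, kernel=5, padding=2)
```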

v0.2.1

27 Jan 00:04
bdd262e
v0.2.1 Pre-release

Includes the new dnn_to_bnn feature:
An API to convert a deterministic deep neural network (DNN) model of any architecture to a Bayesian deep neural network (BNN) model, simplifying model definition via drop-in replacement of Convolutional, Linear, and LSTM layers with the corresponding Bayesian layers. This enables seamless conversion of existing large-model topologies to Bayesian deep neural network models for uncertainty-aware applications.
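The drop-in-replacement idea can be sketched as walking a model's layers and swapping each deterministic layer for a Bayesian counterpart with the same constructor signature plus prior parameters. The classes and function below are hypothetical stand-ins to illustrate the pattern, not the library's dnn_to_bnn implementation.

```python
# Hypothetical stand-ins for framework layer types (not the library's classes).
class Linear:
    def __init__(self, n_in, n_out):
        self.n_in, self.n_out = n_in, n_out

class BayesianLinear(Linear):
    """Drop-in replacement: same shape arguments, plus prior parameters."""
    def __init__(self, n_in, n_out, prior_mu=0.0, prior_sigma=1.0):
        super().__init__(n_in, n_out)
        self.prior_mu, self.prior_sigma = prior_mu, prior_sigma

def dnn_to_bnn_sketch(layers, prior):
    """Replace each deterministic layer with its Bayesian counterpart in place."""
    for i, layer in enumerate(layers):
        if type(layer) is Linear:
            layers[i] = BayesianLinear(layer.n_in, layer.n_out, **prior)
    return layers

model = [Linear(784, 128), Linear(128, 10)]
bnn = dnn_to_bnn_sketch(model, {"prior_mu": 0.0, "prior_sigma": 1.0})
```

Because each replacement preserves the original layer's shape arguments, the surrounding topology is untouched, which is what makes the conversion seamless for existing models.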

v0.2.0-pre

27 Jan 00:00
bdd262e
v0.2.0-pre Pre-release

Includes the new dnn_to_bnn feature:
An API to convert a deterministic deep neural network (DNN) model of any architecture to a Bayesian deep neural network (BNN) model, simplifying model definition via drop-in replacement of Convolutional, Linear, and LSTM layers with the corresponding Bayesian layers. This enables seamless conversion of existing large-model topologies to Bayesian deep neural network models for uncertainty-aware applications.

Full Changelog: v0.1...v0.2.0

v0.1

16 Dec 19:25
daa7292
v0.1 Pre-release
Merge pull request #9 from piEsposito/main

Let the models return the prediction only, saving the KL divergence as an attribute.
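The pattern from that change can be illustrated with a toy single-weight layer: forward() returns only the prediction, while the KL divergence to a standard-normal prior is cached on the layer and summed by the caller. This is a stdlib-only sketch with hypothetical names, not the library's layer code.

```python
import math
import random

class BayesianLinear1D:
    """Toy single-weight Bayesian layer. forward() returns the prediction
    only; the KL divergence to a N(0, 1) prior is stored in self.kl."""
    def __init__(self, mu=0.0, rho=-3.0):
        self.mu, self.rho = mu, rho
        self.kl = 0.0

    def forward(self, x):
        sigma = math.log1p(math.exp(self.rho))          # softplus(rho)
        w = self.mu + sigma * random.gauss(0.0, 1.0)    # reparameterized sample
        # Closed-form KL( N(mu, sigma^2) || N(0, 1) ), cached as an attribute.
        self.kl = math.log(1.0 / sigma) + (sigma**2 + self.mu**2) / 2.0 - 0.5
        return w * x

random.seed(0)
layers = [BayesianLinear1D(mu=0.5), BayesianLinear1D(mu=-0.2)]
y = 1.0
for layer in layers:
    y = layer.forward(y)                    # prediction only, no (pred, kl) tuple
total_kl = sum(layer.kl for layer in layers)  # collect KL from attributes
```

Keeping the KL out of the return value means the forward signature matches a deterministic model's, so Bayesian layers can slot into existing training loops that only expect a prediction.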