Add MKL-DNN Int8 support, especially VNNI acceleration. Low-precision inference significantly improves both latency and throughput
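To illustrate the idea behind Int8 inference (this is a conceptual sketch, not BigDL's or MKL-DNN's actual API): float32 values are mapped to int8 with a per-tensor scale, so VNNI-capable hardware can run the heavy math on 8-bit integers. The function names below are hypothetical.

```python
def quantize_int8(values):
    """Symmetric int8 quantization: map floats to [-128, 127] with one scale.
    This mirrors the per-tensor scheme commonly used for int8 inference."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = 127.0 / max_abs
    quantized = [max(-128, min(127, round(v * scale))) for v in values]
    return quantized, scale

def dequantize_int8(q_values, scale):
    """Map int8 values back to approximate float values."""
    return [q / scale for q in q_values]

# Quantize, then recover approximations of the original floats.
q, s = quantize_int8([0.5, -1.0, 0.25])
approx = dequantize_int8(q, s)
```

The approximation error introduced by the 8-bit rounding is what calibration tools bound when preparing a model for low-precision inference.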
Add support for running MKL-BLAS models under MKL-DNN. We leverage MKL-DNN to speed up both training and inference for MKL-BLAS models
Add Spark 2.4 support. Our examples and APIs are fully compatible with Spark 2.4, and we release a binary for Spark 2.4 alongside the binaries for other Spark versions
Details
[New Feature] Add MKL-DNN Int8 support, especially VNNI acceleration
[New Feature] Add support for running MKL-BLAS models under MKL-DNN
[New Feature] Add Spark 2.4 support
[New Feature] Add auto fusion to speed up model inference
[New Feature] Memory reorder support for low precision inference
[New Feature] Add bytes support for DNN Tensor
[New Feature] Add SAME padding in MKL-DNN layers
[New Feature] Add combined (add/or) triggers for training completion
[Enhancement] Enhance Inception-V1 Python training support
[Enhancement] Distributed Optimizer enhancement to support customized optimizer
[Enhancement] Add compute output shape for DNN supported layers
[Enhancement] New MKL-DNN computing thread pool
[Enhancement] Add MKL-DNN support for Predictor
[Enhancement] Documentation enhancements for Sparse Tensor, MKL-DNN support, etc.
[Enhancement] Add ceil mode for AvgPooling and MaxPooling layers
[Enhancement] Add binary classification support for DLClassifierModel
[Enhancement] Improve support for conversion between NHWC and NCHW for memory reorder
[Bug Fix] Fix SoftMax layer with narrowed input
[Bug Fix] Fix TensorFlow loader to support checking all data types
[Bug Fix] Fix Add operation bug to support double type when loading TensorFlow graph
[Bug Fix] Fix one-step weight update missing issue in validation during training
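Two items above concern output-shape arithmetic: SAME padding in MKL-DNN layers, and ceil mode for pooling. As a minimal sketch of the conventional formulas (assumed here; not a reproduction of BigDL's implementation), the per-dimension computations look like this:

```python
import math

def same_pad_output_size(input_size, kernel_size, stride):
    """SAME padding along one spatial dimension: output is
    ceil(input / stride), and total padding is whatever lets the
    kernel cover the input exactly at that output size."""
    out = math.ceil(input_size / stride)
    total_pad = max((out - 1) * stride + kernel_size - input_size, 0)
    return out, total_pad

def pool_output_size(input_size, kernel_size, stride, ceil_mode):
    """Pooling output length: ceil mode rounds the window count up,
    so trailing elements that don't fill a full stride still
    contribute an output position."""
    rounder = math.ceil if ceil_mode else math.floor
    return rounder((input_size - kernel_size) / stride) + 1

# A 224-wide input with a 7-wide kernel and stride 2 under SAME
# padding keeps ceil(224 / 2) = 112 output positions.
out, pad = same_pad_output_size(224, 7, 2)

# An 8-wide input pooled with kernel 3, stride 2: floor mode yields 3
# outputs, ceil mode yields 4.
floor_out = pool_output_size(8, 3, 2, ceil_mode=False)
ceil_out = pool_output_size(8, 3, 2, ceil_mode=True)
```

Ceil mode matters when the input size minus the kernel size is not a multiple of the stride; otherwise the two modes agree.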