Releases
v0.4.0
Highlights
Support all Keras layers and loading of Keras 1.2.2 models. See keras-support for details
Python 3.6 support
OpenCV support, and a dozen image transformers based on OpenCV
More layers/operations
New Features
Models & Layers & Operations & Loss function
Add layers for Keras: Cropping2D, Cropping3D, UpSampling1D, UpSampling2D, UpSampling3D, Masking, Maxout, Highway, GaussianDropout, GaussianNoise, CAveTable, VolumetricAveragePooling, HardSigmoid, SReLU, LocallyConnected1D, LocallyConnected2D, SpatialSeparableConvolution, ActivityRegularization, SpatialDropout1D, SpatialDropout2D, SpatialDropout3D (see the sketch after this list)
Add criterions for Keras: PoissonCriterion, KullbackLeiblerDivergenceCriterion, MeanAbsolutePercentageCriterion, MeanSquaredLogarithmicCriterion, CosineProximityCriterion
Support NHWC for LRN and BatchNormalization
Add LookupTableSparse (lookup table for multivalue)
Add activation argument for recurrent layers
Add MultiRNNCell
Add SpatialSeparableConvolution
Add MSRA filler
Support SAME padding in 3D convolution, and allow users to configure the padding size in ConvLSTM and ConvLSTM3D
TF operations: SegmentSum, conv3d-related operations, Dilation2D, Dilation2DBackpropFilter, Dilation2DBackpropInput, Digamma, Erf, Erfc, Lgamma, TanhGrad, depthwise, Rint, All, Any, Range, Exp, Expm1, Round, FloorDiv, TruncateDiv, Mod, FloorMod, TruncateMod, InTopK, Maximum, Minimum, BatchMatMul, Sqrt, SqrtGrad, Square, RsqrtGrad, AvgPool, AvgPoolGrad, BiasAddV1, SigmoidGrad, Relu6, Relu6Grad, Elu, EluGrad, Softplus, SoftplusGrad, LogSoftmax, Softsign, SoftsignGrad, Abs, LessEqual, GreaterEqual, ApproximateEqual, Log, LogGrad, Log1p, Log1pGrad, SquaredDifference, Div, Ceil, Inv, InvGrad, IsFinite, IsInf, IsNan, Sign, TopK. See details at tensorflow_ops_list
Add object detection related layers: PriorBox, NormalizeScale, Proposal, DetectionOutputSSD, DetectionOutputFrcnn, Anchor
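Below is a minimal sketch of constructing a couple of the new Keras-style layers through the Python API. The Python wrapper names are assumed to mirror the layer names listed above, and the constructor arguments shown (upsampling size, drop probability) are assumptions; check the layer API documentation for the exact signatures.

```python
# Minimal sketch (assumed Python wrappers and constructor arguments).
from bigdl.nn.layer import Sequential, UpSampling2D, SpatialDropout2D

model = Sequential()
model.add(UpSampling2D([2, 2]))    # assumed: upsampling factor per spatial dimension
model.add(SpatialDropout2D(0.3))   # assumed: probability of dropping a feature map
```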
Transformer
Add image transformers based on OpenCV: Resize, Brightness, ChannelOrder, Contrast, Saturation, Hue, ChannelNormalize, PixelNormalize, RandomCrop, CenterCrop, FixedCrop, DetectionCrop, Expand, Filler, ColorJitter, RandomSampler, MatToFloats, AspectScale, RandomAspectScale, BytesToMat (see the pipeline sketch after this list)
Add transformers: RandomTransformer, RoiProject, RoiHFlip, RoiResize, RoiNormalize
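A minimal sketch of chaining several of the new OpenCV-based transformers over an ImageFrame in Python. The Pipeline wrapper, the call convention, and the transformer arguments (resize size, crop size, per-channel means) are assumptions; see the vision transform API documentation for the exact signatures.

```python
# Minimal sketch (assumed arguments), run against an existing SparkContext `sc`.
from bigdl.transform.vision.image import ImageFrame, Resize, CenterCrop, \
    ChannelNormalize, MatToFloats, Pipeline

image_frame = ImageFrame.read("hdfs://path/to/images", sc)
transformer = Pipeline([Resize(256, 256),
                        CenterCrop(224, 224),
                        ChannelNormalize(123.0, 117.0, 104.0),  # assumed per-channel means
                        MatToFloats()])
transformed_frame = transformer(image_frame)  # apply the pipeline (assumed call convention)
```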
API change
Add predictImage function in LocalPredictor
Add a partition-number option for ImageFrame read (see the sketch after this list)
Add an API to get a node from a graph model by name
Support a list of JTensors as label in the Python API
Expose local optimizer and predictor in Python API
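A minimal sketch of the new partition-number option when reading an ImageFrame from Python. The keyword name `min_partitions` is an assumption; verify it against the ImageFrame.read signature in your BigDL version.

```python
# Minimal sketch: read images from HDFS into a distributed ImageFrame,
# requesting an explicit number of partitions (keyword name is an assumption).
from bigdl.transform.vision.image import ImageFrame

image_frame = ImageFrame.read("hdfs://path/to/images", sc, min_partitions=32)
```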
Install & Deploy
Model Save/Load
Support big models (parameters exceeding 2.1G) for both Java and protobuf serialization
Support Keras model loading (see the sketch below)
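A minimal sketch of loading a Keras 1.2.2 model (JSON definition plus HDF5 weights) into BigDL from Python. The keyword names `json_path` and `hdf5_path` are assumptions; the keras-support documentation has the authoritative API.

```python
# Minimal sketch (assumed keyword names) of loading a Keras 1.2.2 model into BigDL.
from bigdl.nn.layer import Model

bigdl_model = Model.load_keras(json_path="model.json", hdf5_path="weights.h5")
```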
Training
Allow users to set new training data or a new criterion when reusing an optimizer
Support gradient clipping (constant clipping and clipping by L2 norm); see the sketch below
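A minimal sketch of reusing an existing Optimizer with new training data and enabling gradient clipping, via the Python API. The method names `set_traindata`, `set_criterion`, `set_gradclip_const` and `set_gradclip_l2norm` are assumptions derived from the items above; `model`, `criterion`, `train_rdd`, `new_train_rdd` and `new_criterion` are placeholders.

```python
# Minimal sketch (assumed method names); model/criterion/RDDs are placeholders.
from bigdl.optim.optimizer import Optimizer, Adam, MaxEpoch

optimizer = Optimizer(model=model, training_rdd=train_rdd, criterion=criterion,
                      optim_method=Adam(), end_trigger=MaxEpoch(5), batch_size=128)
optimizer.optimize()

# Reuse the same optimizer with new data and a new loss:
optimizer.set_traindata(new_train_rdd, 128)     # assumed: (training_rdd, batch_size)
optimizer.set_criterion(new_criterion)          # assumed: new loss function

# Gradient clipping: either constant clipping or clipping by L2 norm.
optimizer.set_gradclip_const(-2.0, 2.0)         # assumed: (min, max)
# optimizer.set_gradclip_l2norm(1.0)            # assumed: clip norm
optimizer.optimize()
```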
Enhancement
Speed up BatchNormalization
Speed up MSECriterion
Speed up Adam
Speed up static graph execution
Support reading TFRecord files from HDFS
Support reading raw binary files from HDFS
Check input sizes in the Concat layer
Add proper exception handling for CaffeLoader and Persister
Add serialization support for multiple tensor numeric types
Add an Activity wrapper in Python to simplify the return value
Override joda-time in hadoop-aws to reduce compile time
LocalOptimizer: use a ModelBroadcast-like method to clone the module
Add time counting for ParallelTable's forward/backward
Use shade to package jar-with-dependencies to manage package conflicts
Support loading bigdl_conf_file from multiple Python zip files
Bug Fix
Fix getModel failure in DistriOptimizer when model parameters exceed 2.1G
Fix core number being detected as 0 when there is only one core in the system
Fix SparseJoinTable throwing an exception when the input's nElement changes
Fix issues found when saving a BigDL model to TensorFlow format
Fix the return object type error of DLClassifier.transform in Python
Fix graph generateBackward being lost in serialization
Fix resizing a tensor to an empty tensor not working properly
Fix Adapter layer not supporting different batch sizes at runtime
Fix Adapter layer not being directly serializable
Fix calling the wrong function when setting user-defined MKL threads
Fix SmoothL1Criterion and SoftmaxWithCriterion not handling the input's offset
Fix L1Regularization throwing NullPointerException while broadcasting the model
Fix CMul layer crashing for certain configurations