All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- Fixed (temporary fix) the huge number of learning-rate groups returned from C at `libtch/tensor.go` `AtoGetLearningRates`
- Fixed incorrect `nn.AdamWConfig` and some documentation.
- Fixed/reworked `vision.ResNet` and `vision.DenseNet` to fix incorrect layers and a memory leak.
- Changed `dutil.DataLoader.Reset()` to reshuffle when resetting the DataLoader if the shuffle flag is true.
- Changed `dutil.DataLoader.Next()`: deleted the batch size == 1 special case so that, for consistency, items are always returned in a slice (`[]element dtype`) even with batch size = 1.
- Added `nn.CrossEntropyLoss` and `nn.BCELoss`.
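The two `DataLoader` changes above can be illustrated with a minimal standalone sketch. This is not gotch's `dutil` implementation; the `loader` type and its fields are purely illustrative: `Reset()` reshuffles when its flag is set, and `Next()` always yields a batch as a slice, even for batch size 1.

```go
package main

import (
	"fmt"
	"math/rand"
)

// loader is a toy batch iterator over ints; it is NOT dutil.DataLoader,
// just an illustration of the two behaviours described above.
type loader struct {
	data      []int
	batchSize int
	pos       int
	shuffle   bool
	rng       *rand.Rand
}

// Reset rewinds the loader and, if the shuffle flag is set, reshuffles
// the underlying data (the Reset() change above).
func (l *loader) Reset() {
	l.pos = 0
	if l.shuffle && l.rng != nil {
		l.rng.Shuffle(len(l.data), func(i, j int) {
			l.data[i], l.data[j] = l.data[j], l.data[i]
		})
	}
}

// Next always returns a batch as a slice, even when batchSize == 1, so
// callers handle one uniform case (the Next() change above).
func (l *loader) Next() ([]int, bool) {
	if l.pos >= len(l.data) {
		return nil, false
	}
	end := l.pos + l.batchSize
	if end > len(l.data) {
		end = len(l.data)
	}
	batch := l.data[l.pos:end]
	l.pos = end
	return batch, true
}

func main() {
	l := &loader{
		data:      []int{1, 2, 3},
		batchSize: 1,
		shuffle:   true,
		rng:       rand.New(rand.NewSource(42)),
	}
	for batch, ok := l.Next(); ok; batch, ok = l.Next() {
		fmt.Println(batch) // always a slice: [1], then [2], then [3]
	}
	l.Reset() // rewinds and reshuffles because shuffle == true
}
```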
- Fixed ctype `long` causing a compile error on macOS, as noted in #44. Not working on a Linux box.
- Fixed multiple memory leaks at `vision/image.go`
- Fixed memory leak at `dutil/dataloader.go`
- Fixed multiple memory leaks at `efficientnet.go`
- Added `dataloader.Len()` method
- Fixed deleting input tensor inside function at `tensor/other.go` `tensor.CrossEntropyForLogits` and `tensor.AccuracyForLogits`
- Added warning to `varstore.LoadPartial` when tensor shapes mismatch between source and varstore.
- Fixed incorrect mismatched-tensor-shape message at `nn.Varstore.Load`
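The `LoadPartial` behaviour described above — load what matches, warn and skip on shape mismatch — can be sketched standalone. The `entry` type and `loadPartial` function are hypothetical stand-ins, not gotch's varstore code:

```go
package main

import (
	"fmt"
	"log"
	"reflect"
)

// entry stands in for a named tensor: just a shape and flat data here.
type entry struct {
	shape []int64
	data  []float32
}

// loadPartial copies source entries into dst, skipping (with a warning)
// any whose shapes differ from the destination's — the behaviour the
// changelog entry describes for varstore.LoadPartial. Returns the
// number of entries actually loaded.
func loadPartial(dst, src map[string]entry) int {
	loaded := 0
	for name, s := range src {
		d, ok := dst[name]
		if !ok {
			continue // name not present in the destination varstore
		}
		if !reflect.DeepEqual(d.shape, s.shape) {
			log.Printf("WARNING: skipping %q: source shape %v != varstore shape %v",
				name, s.shape, d.shape)
			continue
		}
		dst[name] = s
		loaded++
	}
	return loaded
}

func main() {
	dst := map[string]entry{
		"w": {shape: []int64{2, 2}, data: make([]float32, 4)},
		"b": {shape: []int64{2}, data: make([]float32, 2)},
	}
	src := map[string]entry{
		"w": {shape: []int64{2, 2}, data: []float32{1, 2, 3, 4}},
		"b": {shape: []int64{3}, data: []float32{1, 2, 3}}, // mismatched: warned and skipped
	}
	fmt.Println(loadPartial(dst, src)) // 1
}
```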
- Fixed incorrect y -> x at `vision/aug/affine.go` getParam func
- Fixed double free tensor at `vision/aug/function.go` Equalize func.
- Changed `vision/aug`: all input images should be `uint8` (Byte) dtype and the transformed output keeps the same dtype (`uint8`), so that `Compose()` can compose any transformer options.
- Fixed wrong result of `aug.RandomAdjustSharpness`
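The `vision/aug` dtype convention above (uint8 in, uint8 out) is what lets a compose step chain arbitrary transforms: every op shares one signature. A minimal illustrative sketch — not gotch's `Compose()`, and the `transform` type here is an assumption of this example only:

```go
package main

import "fmt"

// transform is a stand-in for an augmentation op: with every op taking
// and returning uint8 (Byte) image data, any sequence composes cleanly.
type transform func([]uint8) []uint8

// compose chains transforms left to right — the property the
// changelog's Compose() entry relies on.
func compose(ts ...transform) transform {
	return func(img []uint8) []uint8 {
		for _, t := range ts {
			img = t(img)
		}
		return img
	}
}

func main() {
	invert := func(img []uint8) []uint8 {
		out := make([]uint8, len(img))
		for i, p := range img {
			out[i] = 255 - p
		}
		return out
	}
	darken := func(img []uint8) []uint8 {
		out := make([]uint8, len(img))
		for i, p := range img {
			out[i] = p / 2
		}
		return out
	}
	aug := compose(invert, darken) // any order of uint8 ops composes
	fmt.Println(aug([]uint8{0, 255})) // [127 0]
}
```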
- Fixed memory leak at `aug/function.getAffineGrid`
- Changed `vision/aug`: corrected ColorJitter
- Changed `vision/aug`: corrected Resize
- Changed `dutil/sampler` to accept batch sizes from 1.
- Fixed double free in `vision/image.go` `resizePreserveAspectRatio`
Skip this tag
Same as [0.3.10]
- Updated installation instructions in README.md
- [#38] fixed JIT model
- Added Optimizer Learning Rate Schedulers
- Added AdamW Optimizer
- #24, #26: fixed memory leak.
- #30: fixed `varstore.Save()` randomly panicking (segfault)
- #32: fixed `nn.Seq` Forward returning a nil tensor when the number of layers = 1
- [#36]: resolved image augmentation
- #20: fixed `IValue.Value()` method returning `[]interface{}` instead of `[]Tensor`
- Added trainable JIT Module APIs and example/jit-train. Now a Python PyTorch model (`.pt`) can be loaded, then training/fine-tuning can continue in Go.
- Added `dutil` sub-package that serves PyTorch `DataSet` and `DataLoader` concepts
- Added function `gotch.CudaIfAvailable()`. NOTE that `device := gotch.NewCuda().CudaIfAvailable()` will throw an error if CUDA is not available.
- Switched back to installing libtorch inside the gotch library, as Go's init() function is triggered after cgo is called.
- #4 Automatically download and install Libtorch and setup environment variables.
- #6: Go-native tensor printing using the `fmt.Formatter` interface. Now a tensor can be printed out like `fmt.Printf("%.3f", tensor)` (for float types).
- nn/sequential: fixed missing case for number of layers = 1 that caused a panic
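The sequential fix above amounts to treating a one-layer sequence like any other length instead of as a forgotten special case. A minimal illustrative sketch — not gotch's `nn.Seq`, just toy function-valued layers:

```go
package main

import "fmt"

// forward applies layers in order. The point of the fix above is that a
// sequence of exactly one layer must work like any other length, not
// fall through a missing case and panic.
func forward(layers []func(float64) float64, x float64) float64 {
	if len(layers) == 0 {
		return x // empty sequential acts as the identity
	}
	for _, l := range layers {
		x = l(x)
	}
	return x
}

func main() {
	double := func(x float64) float64 { return 2 * x }
	addOne := func(x float64) float64 { return x + 1 }

	// One layer — the case that used to panic — works like any other.
	fmt.Println(forward([]func(float64) float64{double}, 3))         // 6
	fmt.Println(forward([]func(float64) float64{double, addOne}, 3)) // 7
}
```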
- nn/varstore: fixed nil pointer at LoadPartial due to not breaking out of a loop
- Changed to use `map[string]*Tensor` at `nn/varstore.go`
- Changed to use `*Path` argument of `NewLayerNorm` method at `nn/layer-norm.go`
- Lots of clean-up of return variables, i.e. retVal, err
- Updated to Pytorch C++ APIs v1.7.0
- Switched back to `lib.AtoAddParametersOld`, as `ato_add_parameters` has not been implemented correctly; using the updated API causes the optimizer to stop working.
- Convert all APIs to using Pointer Receiver
- Added drawing image label at `example/yolo` example
- Added some example images and README files for `example/yolo` and `example/neural-style-transfer`
- Added `tensor.SaveMultiNew`
- Reverted changes of #10 to the original.
- #10: `ts.Drop()` and `ts.MustDrop()` can now be called multiple times without panic
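The #10 behaviour described above — `Drop()` being safe to call repeatedly — is the classic guard-against-double-free pattern. A minimal illustrative sketch, not gotch's tensor code; the `handle` type merely mimics a value owning a C allocation:

```go
package main

import "fmt"

// handle mimics a tensor owning a C allocation: Drop releases it, and a
// nil check makes repeated Drop calls a no-op instead of a double free.
type handle struct {
	ptr *int // stands in for the underlying C pointer
}

func (h *handle) Drop() {
	if h.ptr == nil {
		return // already freed: calling Drop again is safe
	}
	h.ptr = nil // release the underlying storage exactly once
}

func main() {
	v := 42
	h := &handle{ptr: &v}
	h.Drop()
	h.Drop() // safe: second call hits the nil guard, no panic
	fmt.Println(h.ptr == nil) // true
}
```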