Releases: marcpinet/neuralnetlib
neuralnetlib 4.3.4
- feat+fix(gan+compiles): metrics in compile + fixed labels in GAN
- refactor(gan): evaluate
- fix(gan): metrics handling
- fix(mmd): numerical instability
- refactor(gan): some improvements
- fix(dropout): input shape for gans
- feat(gan): adapt plotted sample generation to the output activation function
- docs(notebook): update gan example
- docs(readme): update transformer description
- fix(cgan): conditional handling in G/D
- fix(lr): add gan support
- feat: add balanced batch sampling
- fix(docs): notebook example for gan
- build: correct version
- fix: beam search
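
The balanced batch sampling added in this release can be illustrated with a minimal NumPy sketch. This is a generic version of the technique — the function name and signature are hypothetical, not neuralnetlib's actual API:

```python
import numpy as np

def balanced_batches(y, batch_size, seed=0):
    """Yield index batches drawing an equal number of samples per class.

    Hypothetical sketch of class-balanced batching, not neuralnetlib's
    implementation: each class's indices are shuffled once, then sliced
    so every batch holds batch_size // n_classes samples of each class.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    per_class = batch_size // len(classes)
    pools = [rng.permutation(np.flatnonzero(y == c)) for c in classes]
    n_batches = min(len(p) for p in pools) // per_class
    for b in range(n_batches):
        batch = np.concatenate(
            [p[b * per_class:(b + 1) * per_class] for p in pools]
        )
        rng.shuffle(batch)  # avoid class-ordered batches
        yield batch
```

For a binary dataset with 6 samples per class and `batch_size=4`, this yields 3 batches, each containing exactly 2 samples of each class.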
neuralnetlib 4.3.3
- fix(configs): saving and loading of layer states
- fix(configs): saving and loading of loss states
- fix(configs): saving and loading of optimizer states
- fix(configs): saving and loading of model states
- ci: bump version to 4.3.3
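
The state-saving fixes above all revolve around the usual config round-trip pattern. A minimal sketch of that pattern — generic, with a hypothetical `Dropout` class, not neuralnetlib's actual classes:

```python
class Dropout:
    """Minimal layer showing a get_config / from_config round-trip.

    Generic serialization pattern sketch (hypothetical class, not
    neuralnetlib's API): get_config returns a plain dict of constructor
    arguments, and from_config rebuilds an equivalent layer from it.
    """

    def __init__(self, rate=0.5):
        self.rate = rate

    def get_config(self):
        # Everything needed to reconstruct the layer, JSON-serializable
        return {"rate": self.rate}

    @classmethod
    def from_config(cls, config):
        return cls(**config)
```

The round-trip `Dropout.from_config(layer.get_config())` must reproduce the layer exactly — the kind of invariant these fixes restore.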
neuralnetlib 4.3.2
- docs: update readme
- fix(dropout/batchnorm): save & load
- ci: bump version to 4.3.2
neuralnetlib 4.3.1
- fix(batchnorm): init
- ci: bump version to 4.3.1
neuralnetlib 4.3.0
- fix(git): file caching
- feat: add support for multi-label classification
- fix(cgan): y_train when None
- feat(cgan): add label smoothing
- fix(cgan): predictions
- docs(gans): update
- docs(notebook): add svm
- feat(utils): add make_blobs
- feat(utils): add make_classification
- feat(metrics): add adjusted_rand_score
- feat(metrics): add adjusted_mutual_info_score
- fix(metrics): MAE, MSE, MAPE
- refactor: use standalone functions everywhere
- ci: bump version to 4.3.0
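
The `make_blobs` utility added here follows the standard isotropic-Gaussian-blobs recipe. A minimal NumPy sketch of that recipe (hypothetical signature, not neuralnetlib's actual function):

```python
import numpy as np

def make_blobs(n_samples=100, centers=3, n_features=2,
               cluster_std=1.0, seed=0):
    """Generate isotropic Gaussian blobs for clustering demos.

    Generic sketch (not neuralnetlib's implementation): place `centers`
    random cluster centers, assign each sample a random center, then add
    isotropic Gaussian noise with standard deviation `cluster_std`.
    """
    rng = np.random.default_rng(seed)
    center_coords = rng.uniform(-10.0, 10.0, size=(centers, n_features))
    y = rng.integers(0, centers, size=n_samples)
    X = center_coords[y] + rng.normal(0.0, cluster_std,
                                      size=(n_samples, n_features))
    return X, y
```

This produces `(n_samples, n_features)` data with integer labels in `[0, centers)`, ready to feed clustering metrics like the `adjusted_rand_score` also added in this release.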
neuralnetlib 4.2.0
- docs(notebook): fresh run (again)
- docs(readme): update quick examples
- docs(readme): update (again)
- docs: add conv example for gan
- feat(layer): add Conv2DTranspose
- docs: add conv example for gan using conv2dtranspose
- feat(GAN): add Conditional GAN (CGAN)
- ci: bump version to 4.2.0
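
The core of a `Conv2DTranspose` layer like the one added here is a scatter-add of the kernel at strided positions, with output size `(in - 1) * stride - 2 * padding + kernel`. A single-channel NumPy sketch of that operation (illustrative only, not neuralnetlib's layer):

```python
import numpy as np

def conv2d_transpose(x, w, stride=1, padding=0):
    """Single-channel transposed 2D convolution (sketch, not
    neuralnetlib's Conv2DTranspose).

    Each input value scatters a scaled copy of the kernel into the
    output at a stride-spaced location; overlaps are summed.
    """
    H, W = x.shape
    kH, kW = w.shape
    out_h = (H - 1) * stride - 2 * padding + kH
    out_w = (W - 1) * stride - 2 * padding + kW
    full = np.zeros((out_h + 2 * padding, out_w + 2 * padding))
    for i in range(H):
        for j in range(W):
            full[i * stride:i * stride + kH,
                 j * stride:j * stride + kW] += x[i, j] * w
    # Crop the padding border to get the final output
    return full[padding:padding + out_h, padding:padding + out_w]
```

With `stride=2` this upsamples spatial dimensions, which is why it is the standard building block for DCGAN-style generators such as the conv GAN example above.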
neuralnetlib 4.1.0
- fix(LSTM): huge improvements in gradient flow
- feat(gradient_norm): better batch handling
- fix(Callbacks): now fully working generically
- fix(metrics): val_loss
- fix(Embedding): bias in output
- fix(display): val_*
- feat(metric): add pearsonr
- feat(metric): add kurtosis
- feat(metric): add skew
- fix(metric): skewness and kurtosis when variance=0
- fix(gan): batch logs
- fix(activation): config loading
- feat(lstm): improved flexibility and parameterization
- docs(notebook): fresh run
- ci: bump version to 4.1.0
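
The `pearsonr` metric and the variance-zero fixes above share the same guard: when an input is constant the denominator is zero and the statistic is undefined. A minimal NumPy sketch of Pearson correlation with that guard (generic, not neuralnetlib's implementation):

```python
import numpy as np

def pearsonr(x, y):
    """Pearson correlation coefficient with a zero-variance guard.

    Generic sketch (not neuralnetlib's metric): returns 0.0 instead of
    dividing by zero when either input has zero variance.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    denom = np.sqrt((xm ** 2).sum() * (ym ** 2).sum())
    if denom == 0.0:  # constant input: correlation is undefined
        return 0.0
    return float((xm * ym).sum() / denom)
```

The same degenerate-variance check applies to skewness and kurtosis, whose `variance=0` case is fixed in this release.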
neuralnetlib 4.0.7
- fix(LSTM): huge improvements in gradient flow
- feat(gradient_norm): better batch handling
- fix(Callbacks): now fully working generically
- fix(metrics): val_loss
- fix(Embedding): bias in output
- fix(display): val_*
- feat(metric): add pearsonr
- feat(metric): add kurtosis
- feat(metric): add skew
- fix(metric): skewness and kurtosis when variance=0
- fix(gan): batch logs
- fix(activation): config loading
- feat(lstm): improved flexibility and parameterization
- docs(notebook): fresh run
- ci: bump version to 4.0.7
neuralnetlib 4.0.6
- refactor: autopep8 format
- ci: bump version to 4.0.6
neuralnetlib 4.0.5
- fix(cosine_sim): division by zero
- refactor: remove the use of enums
- docs: huge update and new examples
- docs(readme): update hyperlinks
- feat: add SVM
- fix(Dense): bad variable init
- fix(Tokenizer): BPE tokenization
- fix: use the Jacobian matrix instead of an approximation
- fix(AddNorm): backward pass
- ci: bump version to 4.0.5
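
The cosine-similarity division-by-zero fix in this release corresponds to the standard epsilon guard on the norm product. A minimal NumPy sketch of the guarded form (generic technique, not neuralnetlib's actual function):

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    """Cosine similarity with a guard against zero-norm vectors.

    Generic sketch (not neuralnetlib's implementation): clamping the
    denominator at `eps` makes the similarity of a zero vector 0.0
    instead of raising or returning NaN.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    denom = max(np.linalg.norm(a) * np.linalg.norm(b), eps)
    return float(np.dot(a, b) / denom)
```

Without the clamp, `np.dot(a, b) / 0.0` on a zero vector yields NaN and silently poisons downstream metrics — the failure mode this fix targets.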