- Fixed the UI of GA.
- Greatly optimized GA (about four times faster).
- Fixed Minibatch_GD and GA.
- Slightly optimized GA.
- Added ResNet as a network type.
- Added Residual as a layer to load ResNet.
- Added TConv2D as a layer.
- Added Dropout as a layer.
- Updated dependencies, which solves the MaxPooling problem.
- Updated dependencies.
- Improved the speed of MaxPooling2D, though it still hangs :(.
- Rewrote the structure for better performance and a more logical design.
- Added Flatten and Constructive as layers to reshape data.
- Updated model management.
- Rewrote UI and simplified it.
- Renamed to DianoiaML.jl.
- The Genetic Algorithm is now officially supported.
- BLAS now runs with a single thread by default (a sketch for restoring multi-threaded BLAS follows this list).
- Fixed a few bugs.
- Updated the API; it can now auto-complete most of the parameters.
- Added padding and biases in Conv2D.
- Added UpSampling2D as a layer.
- Added Genetic Algorithm as an experimental optimizer.
- Fixed a few bugs.
- Fixed the slower-than-expected convergence when training convolutional networks. (Except with Minibatch_GD; I suppose this optimizer is simply inefficient in this case.)
- Supports HDF5.jl 0.15.4.
- Added clamping of values to the interval (-3.0f38, 3.0f38) to prevent overflow and NaN (a small clamping sketch follows this list).
- GAN can now display the loss of discriminator.
- Slightly improved the syntax.
- Added GAN as a new network type.
- Split Cross_Entropy_Loss into Categorical_Cross_Entropy_Loss and Binary_Cross_Entropy_Loss.
- Fixed the loss display of AdaBelief.
- Known issues: there is still a chance of producing NaN; I am working on it. For now, reducing the use of ReLU in relatively deep networks may solve the problem.
- Fixed a bug where selecting AdaBelief activated Adam.
- Optimized the structure for future development of GAN models.
- Updated the argument keywords for the `fit` function.
- Greatly improved the training speed by optimizing the structure.
- Fixed a bug where the filters in Conv2D could not be updated until the model was saved.
- Fixed a bug where the model could not be trained multiple times.
- Added SGD as an optimizer.
- Optimized the structure and syntax; the "minibatch" problem is now solved.
- Accelerated the framework by using LoopVectorization.jl.
- Weights and biases are now generated with GlorotUniform (a sketch of this initializer follows this list).
- Added Monitor to show the current loss.
- Known issues: I found out that all my optimizers update once per batch, which means they work just like Minibatch Gradient Descent; Adam and AdaBelief therefore behave like Minibatch Adam and Minibatch AdaBelief. This slows down training. I will try to restructure the whole program in the next update.
- Greatly improved the training speed.
- In the example, it is about 20 seconds slower than Keras (epochs=5, batch_size=128).
- Added Convolutional2D and MaxPooling2D as layers.
- Added Mean Squared Loss as a loss function.
- Added Adam and AdaBelief as optimizers.
- Added One Hot and Flatten as tools.
- Improved the structures.
- Known issues: Convolutional2D requires a lot of RAM and is relatively slow.
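
A few entries above mention mechanisms that are easier to see in code. First, the single-threaded BLAS default: the snippet below is a minimal sketch using the standard LinearAlgebra API, assuming the framework simply calls `BLAS.set_num_threads(1)` at load time; multi-threading can be restored the same way.

```julia
using LinearAlgebra

BLAS.set_num_threads(1)                 # what the framework is assumed to set by default
BLAS.set_num_threads(Sys.CPU_THREADS)   # opt back in to multi-threaded BLAS
@show BLAS.get_num_threads()
```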
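
Second, the (-3.0f38, 3.0f38) value limit: a minimal sketch of that kind of clamping, where `clamp_values` is a hypothetical helper rather than the framework's actual function.

```julia
# Clamp every element into the interval mentioned in the changelog entry.
clamp_values(x::AbstractArray{Float32}) = clamp.(x, -3.0f38, 3.0f38)

clamp_values(Float32[1.0f0, 3.3f38, -Inf32])   # -> Float32[1.0, 3.0f38, -3.0f38]
```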
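
Finally, GlorotUniform initialization: a sketch of the commonly used Glorot (Xavier) uniform rule, drawing weights from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)). The `glorot_uniform` function here is illustrative; the framework's own initializer may differ in details.

```julia
# Glorot/Xavier uniform initialization for a dense weight matrix.
function glorot_uniform(fan_in::Int, fan_out::Int)
    limit = sqrt(6.0f0 / (fan_in + fan_out))
    return (rand(Float32, fan_out, fan_in) .* 2f0 .- 1f0) .* limit
end

W = glorot_uniform(784, 128)   # e.g. weights for a 784 -> 128 dense layer
```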