
Developed between 2002-2004 and 2013-2021. Picture: GB-RBM reconstructed data (red) from training examples (green); the variance/probability distribution of the input data is approximated more or less correctly.


The machine learning component contains:

  • Unix command-line tools for doing machine learning from scripts (nntool, dstool)
  • a C++ (CPU) implementation, meaning the library should outperform Python or Java in real-time and stand-alone applications (without a GPU)
  • deep learning works reasonably well with nntool (test_titanic.sh with leaky ReLU non-linearity; test_data_residual.sh uses a 20-40 layer residual neural network: 20 layers work perfectly with a simple test problem, 40 layers still give reasonable results)
  • partial support for complex-valued neural networks
  • a multistart L-BFGS second-order optimizer for neural networks
  • dropout heuristics for the deep learning code
  • a basic BayesianNeuralNetwork using Hybrid Monte Carlo sampling
  • principal component analysis (PCA) preprocessing, including symmetric eigenvalue decomposition code (eig) and linear ICA
  • Hidden Markov Model (HMM) learning
  • Variational Autoencoder code (VAE.h / VAE.cpp)
  • [EXPERIMENTAL CODE]: Restricted Boltzmann Machines (RBM): GB-RBM & BB-RBM (DBN.h / GBRBM.h / BBRBM.h)
  • [EXPERIMENTAL CODE]: basic reinforcement learning code (discrete/continuous actions, continuous states)
  • [EXPERIMENTAL CODE]: parallel tempering sampling
  • [EXPERIMENTAL CODE]: recurrent neural network code, including an RNN-RBM that uses BB-RBM output and the BPTT algorithm
  • frequent set mining from data
  • multi-threaded code (std::thread and OpenMP) supporting parallel search/optimization on multicore CPUs
  • uses the BLAS interface (OpenBLAS, Intel MKL) for speed-optimized matrix/vector math
  • CSV (ASCII) file import/export
  • t-SNE dimension reduction, SOM self-organizing maps (slow), simplex optimization, and a 4th-order Runge-Kutta numerical ODE solver (see the sketch after this list)
  • quaternion, 3D matrix rotation, and other code
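
As a quick illustration of one of these numerical building blocks, here is a minimal, self-contained 4th-order Runge-Kutta step in C++. This is generic illustrative code, not the library's own ODE solver API:

```cpp
#include <functional>
#include <cstdio>

// One classical RK4 step for dy/dt = f(t, y).
double rk4_step(const std::function<double(double,double)>& f,
                double t, double y, double h)
{
    const double k1 = f(t, y);
    const double k2 = f(t + 0.5*h, y + 0.5*h*k1);
    const double k3 = f(t + 0.5*h, y + 0.5*h*k2);
    const double k4 = f(t + h, y + h*k3);
    return y + (h/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4);
}

int main()
{
    // Integrate dy/dt = -y from y(0) = 1; the exact solution is exp(-t).
    auto f = [](double t, double y){ (void)t; return -y; };
    double t = 0.0, y = 1.0;
    const double h = 0.01;
    for (int i = 0; i < 100; i++) { y = rk4_step(f, t, y, h); t += h; }
    std::printf("y(1) ~ %f (exact: 0.367879)\n", y);
    return 0;
}
```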

Additionally, the library contains basic linear algebra and various other general computer science algorithms, such as radix sorting of floating-point numbers through a key conversion (O(N)), Hermite curve interpolation, cryptography (AES, DES, Triple-DES, RSA..), arbitrary-precision numbers, etc. The compiler used is GCC https://gcc.gnu.org/ and the build process uses GNU Makefile and autoconf. The library uses C++11 features, so only a reasonably new GCC C++ compiler is likely to work. The project also compiles with the Clang/LLVM compiler but has not been fully tested with Clang.
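
The O(N) float radix sort mentioned above relies on a standard bit-level trick: IEEE-754 bit patterns are converted into unsigned keys whose integer order matches the floats' numeric order. A self-contained sketch of that technique (illustrative only, not the library's own code):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>
#include <cstdio>

// Map float bits to unsigned keys that sort in numeric order:
// negative values get all bits flipped, non-negative values get
// only the sign bit flipped.
static uint32_t float_to_key(float f) {
    uint32_t u; std::memcpy(&u, &f, sizeof(u));
    return (u & 0x80000000u) ? ~u : (u ^ 0x80000000u);
}
static float key_to_float(uint32_t k) {
    uint32_t u = (k & 0x80000000u) ? (k ^ 0x80000000u) : ~k;
    float f; std::memcpy(&f, &u, sizeof(f));
    return f;
}

// O(N) LSD radix sort over the converted keys, one byte per pass.
void radix_sort_floats(std::vector<float>& v) {
    std::vector<uint32_t> keys(v.size()), tmp(v.size());
    for (size_t i = 0; i < v.size(); i++) keys[i] = float_to_key(v[i]);
    for (int shift = 0; shift < 32; shift += 8) {
        size_t count[257] = {0};
        for (uint32_t k : keys) count[((k >> shift) & 0xFF) + 1]++;
        for (int i = 0; i < 256; i++) count[i+1] += count[i];
        for (uint32_t k : keys) tmp[count[(k >> shift) & 0xFF]++] = k;
        keys.swap(tmp);
    }
    for (size_t i = 0; i < v.size(); i++) v[i] = key_to_float(keys[i]);
}

int main() {
    std::vector<float> v = {3.5f, -2.0f, 0.0f, -7.25f, 1.5f};
    radix_sort_floats(v);
    for (float f : v) std::printf("%g ", f); // -7.25 -2 0 1.5 3.5
    std::printf("\n");
    return 0;
}
```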

As of April 2017, the library contains roughly 100,000 lines of C++ code.

Compiles on both Linux and Windows (GCC/Linux, MSYS2/MINGW).

A GCC >= 9.x compiler is needed because GCC 8.x seems to have a bug in class random_device.

NOTE: Please use the experimental branch (RBM_test) if possible, as the main branch is not actively maintained.

LICENSE

Releases are distributed under the GNU GPL license. The library links with BSD/LGPL libraries (zlib, GMP and BLAS).

The author (Tomas Ukkonen) retains full rights to use the library in any way he wants in the future, including releases under different licenses.

PLAN

Implement more modern algorithms (C++):

  • add cuBLAS support for accelerated BLAS routines; the current experimental CUDA implementation is very slow because of continuous data transfers between CPU and GPU (see the sketch after this list)
  • convolutional (sparse) neural networks
  • LSTM RNNs
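
The point of the cuBLAS item is to keep matrices resident on the GPU between calls instead of transferring them continuously. A minimal sketch of what that could look like with the standard cuBLAS API (illustrative only, not the library's current CUDA code):

```cpp
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>

// Multiply two m x m single-precision matrices with cuBLAS. The
// operands stay on the device, so a training loop could issue many
// GEMMs without the per-call CPU<->GPU transfers noted above.
int main()
{
    const int m = 512;
    std::vector<float> hA(m*m, 1.0f), hB(m*m, 2.0f), hC(m*m, 0.0f);

    float *dA, *dB, *dC;
    cudaMalloc(&dA, m*m*sizeof(float));
    cudaMalloc(&dB, m*m*sizeof(float));
    cudaMalloc(&dC, m*m*sizeof(float));
    cudaMemcpy(dA, hA.data(), m*m*sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), m*m*sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;

    // C = alpha*A*B + beta*C, computed entirely on the GPU.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, m, m,
                &alpha, dA, m, dB, m, &beta, dC, m);

    cudaMemcpy(hC.data(), dC, m*m*sizeof(float), cudaMemcpyDeviceToHost);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```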

DOCUMENTATION

class nnetwork<>

class dataset<>

class NNGradDescent<>

class LBFGS<>

class LBFGS_nnetwork<> (neural network LBFGS optimizer)

class HMC<> (parallel hamiltonian monte carlo sampling for neural networks)

tools/nntool

tools/dstool

are the primary classes and tools to look at and use. Also read the HowToLearnFromData page. Those interested in pretraining weights using RBMs should read docs/RBM_notes.tm and src/neuralnetwork/tst/test.cpp: rbm_test(), and look at these classes:

class GBRBM<>

class BBRBM<>

class DBN<>

It is possible to convert a layer-by-layer optimized DBN into a feedforward supervised neural network.
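
The conversion is possible because each pretrained RBM's weight matrix and hidden biases map directly onto one feedforward layer; a supervised output layer is then appended on top and the whole stack fine-tuned with backpropagation. A schematic sketch of the idea (generic code, not the actual DBN<> API):

```cpp
#include <vector>
#include <cmath>

// Schematic: reuse pretrained RBM parameters (W, hidden bias b) as
// sigmoid feedforward layers.
struct RBMLayer {
    std::vector<std::vector<double>> W; // hidden x visible weights
    std::vector<double> b;              // hidden biases
};

std::vector<double> forward(const std::vector<RBMLayer>& dbn,
                            std::vector<double> x)
{
    for (const auto& layer : dbn) {
        std::vector<double> h(layer.b.size());
        for (size_t j = 0; j < h.size(); j++) {
            double a = layer.b[j];
            for (size_t i = 0; i < x.size(); i++)
                a += layer.W[j][i] * x[i];
            h[j] = 1.0 / (1.0 + std::exp(-a)); // sigmoid, as in the RBM
        }
        x = std::move(h);
    }
    return x; // features fed to the appended supervised output layer
}
```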

For mathematical documentation, read the RBM notes, the Bayesian neural network notes and the recurrent neural network gradient notes.
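
As background for the HMC<> class and the Bayesian neural network notes: Hamiltonian Monte Carlo proposes new samples by simulating Hamiltonian dynamics with a leapfrog integrator and then applying a Metropolis accept/reject step. A minimal generic 1D sampler showing the technique (illustrative only, not the library's HMC<> API):

```cpp
#include <cmath>
#include <random>
#include <cstdio>

// Minimal 1D HMC: sample from p(q) ~ exp(-U(q)) with U(q) = q^2/2
// (a standard normal), using leapfrog integration plus a
// Metropolis accept/reject step.
int main()
{
    auto U  = [](double q){ return 0.5*q*q; };  // potential energy
    auto dU = [](double q){ return q; };        // its gradient

    std::mt19937 rng(1234);
    std::normal_distribution<double> gauss(0.0, 1.0);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    double q = 0.0, mean = 0.0, sq = 0.0;
    const int L = 20, N = 10000;
    const double eps = 0.1;

    for (int iter = 0; iter < N; iter++) {
        double q0 = q, p = gauss(rng);
        double H0 = U(q) + 0.5*p*p;

        // Leapfrog: half step for momentum, full steps for position.
        p -= 0.5*eps*dU(q);
        for (int i = 0; i < L; i++) {
            q += eps*p;
            if (i != L-1) p -= eps*dU(q);
        }
        p -= 0.5*eps*dU(q);

        double H1 = U(q) + 0.5*p*p;
        if (unif(rng) >= std::exp(H0 - H1)) q = q0; // reject proposal

        mean += q; sq += q*q;
    }
    mean /= N;
    std::printf("mean ~ %f, variance ~ %f (expected 0, 1)\n",
                mean, sq/N - mean*mean);
    return 0;
}
```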

Recurrent neural networks are activated with --recurrent DEPTH in nntool. For more information, read:

class rLBFGS_nnetwork<>

which trains a basic recurrent nnetwork using L-BFGS. For examples, look at src/neuralnetwork/tst/test.cpp.
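
Conceptually, a recurrent network of this kind applies the same weights repeatedly, feeding hidden state back as extra input for a fixed number of steps (a plausible reading of the DEPTH parameter above), and gradients flow back through all steps via BPTT. A generic unrolled forward pass under that assumption (not nntool's actual implementation):

```cpp
#include <vector>
#include <cmath>

// Illustrative only: unroll a simple recurrent step for `depth`
// iterations, feeding the hidden state back into the network.
std::vector<double> recurrent_forward(
    const std::vector<std::vector<double>>& Wx, // hidden x input
    const std::vector<std::vector<double>>& Wh, // hidden x hidden
    const std::vector<double>& x,
    int depth)
{
    std::vector<double> h(Wh.size(), 0.0);
    for (int t = 0; t < depth; t++) {
        std::vector<double> hnew(h.size());
        for (size_t j = 0; j < h.size(); j++) {
            double a = 0.0;
            for (size_t i = 0; i < x.size(); i++) a += Wx[j][i] * x[i];
            for (size_t k = 0; k < h.size(); k++) a += Wh[j][k] * h[k];
            hnew[j] = std::tanh(a); // nonlinearity applied each step
        }
        h = std::move(hnew);
    }
    return h; // training differentiates through all depth steps (BPTT)
}
```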

TODO

  • clean up the code and remove classes that are not needed and don't belong in this library
  • try to remove the few direct x86 dependencies (the CPUID instruction)
  • PROPER DOCUMENTATION (these wiki pages are also slightly out of date)
  • small things: write bash autocompletion scripts for nntool and dstool

PLATFORMS

The software has been developed on Linux (x86 and amd64). It also compiles with the MINGW-w64 64-bit Windows compiler (MSYS2 environment). It has been reported to work correctly in a 32-bit Cygwin environment (but the Cygwin license is a problem). Currently there are a few x86-specific instructions (CPUID) and assumptions (sizes of int and float, etc.), so only x86 and amd64/x86_64 are fully supported.

CONTACT DETAILS

Tomas Ukkonen [email protected]

GitHub has been the official location of this project since 2015.